00:00:00.001 Started by upstream project "autotest-spdk-v24.01-LTS-vs-dpdk-v23.11" build number 1034 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3701 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.103 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.104 The recommended git tool is: git 00:00:00.104 using credential 00000000-0000-0000-0000-000000000002 00:00:00.107 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.143 Fetching changes from the remote Git repository 00:00:00.147 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.179 Using shallow fetch with depth 1 00:00:00.179 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.179 > git --version # timeout=10 00:00:00.202 > git --version # 'git version 2.39.2' 00:00:00.202 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.218 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.218 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.890 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.900 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.912 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:06.912 > git config core.sparsecheckout # timeout=10 00:00:06.923 > git read-tree -mu HEAD # timeout=10 00:00:06.937 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:06.960 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:06.960 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:07.113 [Pipeline] Start of Pipeline 00:00:07.132 [Pipeline] library 00:00:07.135 Loading library shm_lib@master 00:00:07.135 Library shm_lib@master is cached. Copying from home. 00:00:07.151 [Pipeline] node 00:00:07.159 Running on VM-host-SM17 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2 00:00:07.161 [Pipeline] { 00:00:07.172 [Pipeline] catchError 00:00:07.174 [Pipeline] { 00:00:07.187 [Pipeline] wrap 00:00:07.196 [Pipeline] { 00:00:07.205 [Pipeline] stage 00:00:07.207 [Pipeline] { (Prologue) 00:00:07.227 [Pipeline] echo 00:00:07.229 Node: VM-host-SM17 00:00:07.236 [Pipeline] cleanWs 00:00:07.246 [WS-CLEANUP] Deleting project workspace... 00:00:07.246 [WS-CLEANUP] Deferred wipeout is used... 
00:00:07.252 [WS-CLEANUP] done 00:00:07.533 [Pipeline] setCustomBuildProperty 00:00:07.622 [Pipeline] httpRequest 00:00:08.573 [Pipeline] echo 00:00:08.575 Sorcerer 10.211.164.20 is alive 00:00:08.587 [Pipeline] retry 00:00:08.590 [Pipeline] { 00:00:08.606 [Pipeline] httpRequest 00:00:08.611 HttpMethod: GET 00:00:08.612 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.612 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.613 Response Code: HTTP/1.1 200 OK 00:00:08.614 Success: Status code 200 is in the accepted range: 200,404 00:00:08.614 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.672 [Pipeline] } 00:00:09.695 [Pipeline] // retry 00:00:09.703 [Pipeline] sh 00:00:09.985 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:10.002 [Pipeline] httpRequest 00:00:10.383 [Pipeline] echo 00:00:10.385 Sorcerer 10.211.164.20 is alive 00:00:10.398 [Pipeline] retry 00:00:10.400 [Pipeline] { 00:00:10.418 [Pipeline] httpRequest 00:00:10.424 HttpMethod: GET 00:00:10.425 URL: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:10.426 Sending request to url: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:10.427 Response Code: HTTP/1.1 200 OK 00:00:10.427 Success: Status code 200 is in the accepted range: 200,404 00:00:10.428 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:25.981 [Pipeline] } 00:00:25.999 [Pipeline] // retry 00:00:26.008 [Pipeline] sh 00:00:26.290 + tar --no-same-owner -xf spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:29.646 [Pipeline] sh 00:00:29.926 + git -C spdk log --oneline -n5 00:00:29.926 c13c99a5e test: Various fixes for Fedora40 00:00:29.926 726a04d70 test/nvmf: adjust timeout for bigger nvmes 00:00:29.926 61c96acfb dpdk: Point dpdk submodule at a latest fix from spdk-23.11 00:00:29.926 7db6dcdb8 nvme/fio_plugin: update the way ruhs descriptors are fetched 00:00:29.926 ff6f5c41e nvme/fio_plugin: trim add support for multiple ranges 00:00:29.947 [Pipeline] withCredentials 00:00:29.957 > git --version # timeout=10 00:00:29.971 > git --version # 'git version 2.39.2' 00:00:29.985 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:00:29.987 [Pipeline] { 00:00:29.997 [Pipeline] retry 00:00:29.999 [Pipeline] { 00:00:30.016 [Pipeline] sh 00:00:30.297 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11 00:00:30.309 [Pipeline] } 00:00:30.325 [Pipeline] // retry 00:00:30.331 [Pipeline] } 00:00:30.345 [Pipeline] // withCredentials 00:00:30.356 [Pipeline] httpRequest 00:00:31.512 [Pipeline] echo 00:00:31.513 Sorcerer 10.211.164.20 is alive 00:00:31.522 [Pipeline] retry 00:00:31.523 [Pipeline] { 00:00:31.535 [Pipeline] httpRequest 00:00:31.540 HttpMethod: GET 00:00:31.540 URL: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:00:31.541 Sending request to url: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:00:31.551 Response Code: HTTP/1.1 200 OK 00:00:31.552 Success: Status code 200 is in the accepted range: 200,404 00:00:31.552 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:13.040 [Pipeline] } 00:01:13.057 
[Pipeline] // retry 00:01:13.065 [Pipeline] sh 00:01:13.342 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:15.259 [Pipeline] sh 00:01:15.538 + git -C dpdk log --oneline -n5 00:01:15.538 eeb0605f11 version: 23.11.0 00:01:15.538 238778122a doc: update release notes for 23.11 00:01:15.538 46aa6b3cfc doc: fix description of RSS features 00:01:15.538 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:01:15.538 7e421ae345 devtools: support skipping forbid rule check 00:01:15.556 [Pipeline] writeFile 00:01:15.571 [Pipeline] sh 00:01:15.973 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:15.985 [Pipeline] sh 00:01:16.266 + cat autorun-spdk.conf 00:01:16.266 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:16.266 SPDK_TEST_NVMF=1 00:01:16.266 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:16.266 SPDK_TEST_URING=1 00:01:16.266 SPDK_TEST_USDT=1 00:01:16.266 SPDK_RUN_UBSAN=1 00:01:16.266 NET_TYPE=virt 00:01:16.266 SPDK_TEST_NATIVE_DPDK=v23.11 00:01:16.266 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:16.266 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:16.274 RUN_NIGHTLY=1 00:01:16.276 [Pipeline] } 00:01:16.289 [Pipeline] // stage 00:01:16.305 [Pipeline] stage 00:01:16.307 [Pipeline] { (Run VM) 00:01:16.323 [Pipeline] sh 00:01:16.608 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:16.608 + echo 'Start stage prepare_nvme.sh' 00:01:16.608 Start stage prepare_nvme.sh 00:01:16.608 + [[ -n 1 ]] 00:01:16.608 + disk_prefix=ex1 00:01:16.608 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2 ]] 00:01:16.608 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/autorun-spdk.conf ]] 00:01:16.608 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/autorun-spdk.conf 00:01:16.608 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:16.608 ++ SPDK_TEST_NVMF=1 00:01:16.608 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:16.608 ++ SPDK_TEST_URING=1 00:01:16.608 ++ SPDK_TEST_USDT=1 00:01:16.608 ++ SPDK_RUN_UBSAN=1 00:01:16.608 ++ NET_TYPE=virt 00:01:16.608 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:01:16.608 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:16.608 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:16.608 ++ RUN_NIGHTLY=1 00:01:16.608 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2 00:01:16.608 + nvme_files=() 00:01:16.608 + declare -A nvme_files 00:01:16.608 + backend_dir=/var/lib/libvirt/images/backends 00:01:16.608 + nvme_files['nvme.img']=5G 00:01:16.608 + nvme_files['nvme-cmb.img']=5G 00:01:16.608 + nvme_files['nvme-multi0.img']=4G 00:01:16.608 + nvme_files['nvme-multi1.img']=4G 00:01:16.608 + nvme_files['nvme-multi2.img']=4G 00:01:16.608 + nvme_files['nvme-openstack.img']=8G 00:01:16.608 + nvme_files['nvme-zns.img']=5G 00:01:16.608 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:16.608 + (( SPDK_TEST_FTL == 1 )) 00:01:16.608 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:16.608 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:16.608 + for nvme in "${!nvme_files[@]}" 00:01:16.608 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi2.img -s 4G 00:01:16.608 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:16.608 + for nvme in "${!nvme_files[@]}" 00:01:16.608 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-cmb.img -s 5G 00:01:16.608 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:16.608 + for nvme in "${!nvme_files[@]}" 00:01:16.608 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-openstack.img -s 8G 00:01:16.608 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:16.608 + for nvme in "${!nvme_files[@]}" 00:01:16.608 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-zns.img -s 5G 00:01:16.608 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:16.608 + for nvme in "${!nvme_files[@]}" 00:01:16.608 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi1.img -s 4G 00:01:16.867 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:16.867 + for nvme in "${!nvme_files[@]}" 00:01:16.867 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi0.img -s 4G 00:01:16.867 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:16.867 + for nvme in "${!nvme_files[@]}" 00:01:16.867 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme.img -s 5G 00:01:16.867 Formatting '/var/lib/libvirt/images/backends/ex1-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:16.867 ++ sudo grep -rl ex1-nvme.img /etc/libvirt/qemu 00:01:16.867 + echo 'End stage prepare_nvme.sh' 00:01:16.867 End stage prepare_nvme.sh 00:01:16.878 [Pipeline] sh 00:01:17.159 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:17.159 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex1-nvme.img -b /var/lib/libvirt/images/backends/ex1-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img -H -a -v -f fedora39 00:01:17.159 00:01:17.159 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/spdk/scripts/vagrant 00:01:17.159 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/spdk 00:01:17.159 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2 00:01:17.159 HELP=0 00:01:17.159 DRY_RUN=0 00:01:17.159 NVME_FILE=/var/lib/libvirt/images/backends/ex1-nvme.img,/var/lib/libvirt/images/backends/ex1-nvme-multi0.img, 00:01:17.159 NVME_DISKS_TYPE=nvme,nvme, 00:01:17.159 NVME_AUTO_CREATE=0 00:01:17.159 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img, 00:01:17.159 NVME_CMB=,, 00:01:17.159 NVME_PMR=,, 00:01:17.159 NVME_ZNS=,, 00:01:17.159 NVME_MS=,, 00:01:17.159 NVME_FDP=,, 
00:01:17.159 SPDK_VAGRANT_DISTRO=fedora39 00:01:17.159 SPDK_VAGRANT_VMCPU=10 00:01:17.159 SPDK_VAGRANT_VMRAM=12288 00:01:17.159 SPDK_VAGRANT_PROVIDER=libvirt 00:01:17.159 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:17.159 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:17.159 SPDK_OPENSTACK_NETWORK=0 00:01:17.159 VAGRANT_PACKAGE_BOX=0 00:01:17.159 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile 00:01:17.159 FORCE_DISTRO=true 00:01:17.159 VAGRANT_BOX_VERSION= 00:01:17.159 EXTRA_VAGRANTFILES= 00:01:17.159 NIC_MODEL=e1000 00:01:17.159 00:01:17.159 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora39-libvirt' 00:01:17.159 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2 00:01:20.444 Bringing machine 'default' up with 'libvirt' provider... 00:01:21.012 ==> default: Creating image (snapshot of base box volume). 00:01:21.012 ==> default: Creating domain with the following settings... 00:01:21.012 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1733457873_26cfa7ab7df8e1302c85 00:01:21.012 ==> default: -- Domain type: kvm 00:01:21.012 ==> default: -- Cpus: 10 00:01:21.012 ==> default: -- Feature: acpi 00:01:21.012 ==> default: -- Feature: apic 00:01:21.012 ==> default: -- Feature: pae 00:01:21.012 ==> default: -- Memory: 12288M 00:01:21.012 ==> default: -- Memory Backing: hugepages: 00:01:21.012 ==> default: -- Management MAC: 00:01:21.012 ==> default: -- Loader: 00:01:21.012 ==> default: -- Nvram: 00:01:21.012 ==> default: -- Base box: spdk/fedora39 00:01:21.012 ==> default: -- Storage pool: default 00:01:21.012 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733457873_26cfa7ab7df8e1302c85.img (20G) 00:01:21.012 ==> default: -- Volume Cache: default 00:01:21.012 ==> default: -- Kernel: 00:01:21.012 ==> default: -- Initrd: 00:01:21.012 ==> default: -- Graphics Type: vnc 00:01:21.012 ==> default: -- Graphics Port: -1 00:01:21.012 ==> default: -- Graphics IP: 127.0.0.1 00:01:21.012 ==> default: -- Graphics Password: Not defined 00:01:21.012 ==> default: -- Video Type: cirrus 00:01:21.012 ==> default: -- Video VRAM: 9216 00:01:21.012 ==> default: -- Sound Type: 00:01:21.012 ==> default: -- Keymap: en-us 00:01:21.012 ==> default: -- TPM Path: 00:01:21.012 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:21.012 ==> default: -- Command line args: 00:01:21.012 ==> default: -> value=-device, 00:01:21.012 ==> default: -> value=nvme,id=nvme-0,serial=12340, 00:01:21.012 ==> default: -> value=-drive, 00:01:21.012 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-0-drive0, 00:01:21.012 ==> default: -> value=-device, 00:01:21.012 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:21.012 ==> default: -> value=-device, 00:01:21.012 ==> default: -> value=nvme,id=nvme-1,serial=12341, 00:01:21.012 ==> default: -> value=-drive, 00:01:21.012 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:21.012 ==> default: -> value=-device, 00:01:21.012 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:21.012 ==> default: -> value=-drive, 00:01:21.012 ==> default: -> 
value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:21.012 ==> default: -> value=-device, 00:01:21.012 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:21.012 ==> default: -> value=-drive, 00:01:21.012 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:21.012 ==> default: -> value=-device, 00:01:21.012 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:21.280 ==> default: Creating shared folders metadata... 00:01:21.280 ==> default: Starting domain. 00:01:23.180 ==> default: Waiting for domain to get an IP address... 00:01:38.054 ==> default: Waiting for SSH to become available... 00:01:39.432 ==> default: Configuring and enabling network interfaces... 00:01:43.622 default: SSH address: 192.168.121.240:22 00:01:43.622 default: SSH username: vagrant 00:01:43.622 default: SSH auth method: private key 00:01:45.586 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:53.699 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:01:58.968 ==> default: Mounting SSHFS shared folder... 00:01:59.905 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:01:59.905 ==> default: Checking Mount.. 00:02:01.355 ==> default: Folder Successfully Mounted! 00:02:01.355 ==> default: Running provisioner: file... 00:02:01.922 default: ~/.gitconfig => .gitconfig 00:02:02.489 00:02:02.489 SUCCESS! 00:02:02.489 00:02:02.489 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora39-libvirt and type "vagrant ssh" to use. 00:02:02.489 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:02.489 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora39-libvirt" to destroy all trace of vm. 00:02:02.489 00:02:02.498 [Pipeline] } 00:02:02.513 [Pipeline] // stage 00:02:02.520 [Pipeline] dir 00:02:02.521 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora39-libvirt 00:02:02.523 [Pipeline] { 00:02:02.534 [Pipeline] catchError 00:02:02.535 [Pipeline] { 00:02:02.546 [Pipeline] sh 00:02:02.824 + vagrant ssh-config --host vagrant 00:02:02.824 + sed -ne /^Host/,$p 00:02:02.825 + tee ssh_conf 00:02:07.008 Host vagrant 00:02:07.008 HostName 192.168.121.240 00:02:07.008 User vagrant 00:02:07.008 Port 22 00:02:07.008 UserKnownHostsFile /dev/null 00:02:07.008 StrictHostKeyChecking no 00:02:07.008 PasswordAuthentication no 00:02:07.008 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:07.008 IdentitiesOnly yes 00:02:07.008 LogLevel FATAL 00:02:07.008 ForwardAgent yes 00:02:07.008 ForwardX11 yes 00:02:07.008 00:02:07.021 [Pipeline] withEnv 00:02:07.022 [Pipeline] { 00:02:07.034 [Pipeline] sh 00:02:07.313 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:07.313 source /etc/os-release 00:02:07.313 [[ -e /image.version ]] && img=$(< /image.version) 00:02:07.313 # Minimal, systemd-like check. 
00:02:07.313 if [[ -e /.dockerenv ]]; then 00:02:07.313 # Clear garbage from the node's name: 00:02:07.313 # agt-er_autotest_547-896 -> autotest_547-896 00:02:07.313 # $HOSTNAME is the actual container id 00:02:07.313 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:07.313 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:07.313 # We can assume this is a mount from a host where container is running, 00:02:07.313 # so fetch its hostname to easily identify the target swarm worker. 00:02:07.313 container="$(< /etc/hostname) ($agent)" 00:02:07.313 else 00:02:07.313 # Fallback 00:02:07.313 container=$agent 00:02:07.313 fi 00:02:07.313 fi 00:02:07.313 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:07.313 00:02:07.582 [Pipeline] } 00:02:07.596 [Pipeline] // withEnv 00:02:07.604 [Pipeline] setCustomBuildProperty 00:02:07.618 [Pipeline] stage 00:02:07.621 [Pipeline] { (Tests) 00:02:07.640 [Pipeline] sh 00:02:07.920 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:07.935 [Pipeline] sh 00:02:08.215 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:08.230 [Pipeline] timeout 00:02:08.230 Timeout set to expire in 1 hr 0 min 00:02:08.232 [Pipeline] { 00:02:08.247 [Pipeline] sh 00:02:08.525 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:09.090 HEAD is now at c13c99a5e test: Various fixes for Fedora40 00:02:09.101 [Pipeline] sh 00:02:09.379 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:09.650 [Pipeline] sh 00:02:09.931 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:09.947 [Pipeline] sh 00:02:10.252 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:02:10.253 ++ readlink -f spdk_repo 00:02:10.253 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:10.253 + [[ -n /home/vagrant/spdk_repo ]] 00:02:10.253 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:10.253 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:10.253 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:10.253 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:10.253 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:10.253 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:02:10.253 + cd /home/vagrant/spdk_repo 00:02:10.253 + source /etc/os-release 00:02:10.253 ++ NAME='Fedora Linux' 00:02:10.253 ++ VERSION='39 (Cloud Edition)' 00:02:10.253 ++ ID=fedora 00:02:10.253 ++ VERSION_ID=39 00:02:10.253 ++ VERSION_CODENAME= 00:02:10.253 ++ PLATFORM_ID=platform:f39 00:02:10.253 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:10.253 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:10.253 ++ LOGO=fedora-logo-icon 00:02:10.253 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:10.253 ++ HOME_URL=https://fedoraproject.org/ 00:02:10.253 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:10.253 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:10.253 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:10.253 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:10.253 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:10.253 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:10.253 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:10.253 ++ SUPPORT_END=2024-11-12 00:02:10.253 ++ VARIANT='Cloud Edition' 00:02:10.253 ++ VARIANT_ID=cloud 00:02:10.253 + uname -a 00:02:10.253 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:10.253 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:10.511 Hugepages 00:02:10.511 node hugesize free / total 00:02:10.511 node0 1048576kB 0 / 0 00:02:10.511 node0 2048kB 0 / 0 00:02:10.511 00:02:10.511 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:10.511 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:10.511 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:10.511 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:02:10.511 + rm -f /tmp/spdk-ld-path 00:02:10.511 + source autorun-spdk.conf 00:02:10.511 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:10.511 ++ SPDK_TEST_NVMF=1 00:02:10.511 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:10.511 ++ SPDK_TEST_URING=1 00:02:10.511 ++ SPDK_TEST_USDT=1 00:02:10.511 ++ SPDK_RUN_UBSAN=1 00:02:10.511 ++ NET_TYPE=virt 00:02:10.511 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:02:10.511 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:10.511 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:10.511 ++ RUN_NIGHTLY=1 00:02:10.511 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:10.511 + [[ -n '' ]] 00:02:10.511 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:10.511 + for M in /var/spdk/build-*-manifest.txt 00:02:10.511 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:10.511 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:10.511 + for M in /var/spdk/build-*-manifest.txt 00:02:10.511 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:10.511 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:10.511 + for M in /var/spdk/build-*-manifest.txt 00:02:10.511 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:10.511 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:10.771 ++ uname 00:02:10.771 + [[ Linux == \L\i\n\u\x ]] 00:02:10.771 + sudo dmesg -T 00:02:10.771 + sudo dmesg --clear 00:02:10.771 + dmesg_pid=5912 00:02:10.771 + [[ Fedora Linux == FreeBSD ]] 00:02:10.771 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:10.771 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:10.771 + [[ -e 
/var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:10.771 + [[ -x /usr/src/fio-static/fio ]] 00:02:10.771 + sudo dmesg -Tw 00:02:10.771 + export FIO_BIN=/usr/src/fio-static/fio 00:02:10.771 + FIO_BIN=/usr/src/fio-static/fio 00:02:10.771 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:10.771 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:10.771 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:10.771 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:10.771 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:10.771 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:10.771 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:10.771 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:10.771 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:10.771 Test configuration: 00:02:10.771 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:10.771 SPDK_TEST_NVMF=1 00:02:10.771 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:10.771 SPDK_TEST_URING=1 00:02:10.771 SPDK_TEST_USDT=1 00:02:10.771 SPDK_RUN_UBSAN=1 00:02:10.771 NET_TYPE=virt 00:02:10.771 SPDK_TEST_NATIVE_DPDK=v23.11 00:02:10.771 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:10.771 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:10.771 RUN_NIGHTLY=1 04:05:23 -- common/autotest_common.sh@1689 -- $ [[ n == y ]] 00:02:10.771 04:05:23 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:10.771 04:05:23 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:10.771 04:05:23 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:10.771 04:05:23 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:10.771 04:05:23 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:10.771 04:05:23 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:10.771 04:05:23 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:10.771 04:05:23 -- paths/export.sh@5 -- $ export PATH 00:02:10.771 04:05:23 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:10.771 04:05:23 -- 
common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:10.771 04:05:23 -- common/autobuild_common.sh@440 -- $ date +%s 00:02:10.771 04:05:23 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1733457923.XXXXXX 00:02:10.771 04:05:23 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1733457923.dOlNxQ 00:02:10.771 04:05:23 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:02:10.771 04:05:23 -- common/autobuild_common.sh@446 -- $ '[' -n v23.11 ']' 00:02:10.771 04:05:23 -- common/autobuild_common.sh@447 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:10.771 04:05:23 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:02:10.771 04:05:23 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:10.772 04:05:23 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:10.772 04:05:23 -- common/autobuild_common.sh@456 -- $ get_config_params 00:02:10.772 04:05:23 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:02:10.772 04:05:23 -- common/autotest_common.sh@10 -- $ set +x 00:02:10.772 04:05:23 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:02:10.772 04:05:23 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:10.772 04:05:23 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:10.772 04:05:23 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:10.772 04:05:23 -- spdk/autobuild.sh@16 -- $ date -u 00:02:10.772 Fri Dec 6 04:05:23 AM UTC 2024 00:02:10.772 04:05:23 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:10.772 LTS-67-gc13c99a5e 00:02:10.772 04:05:23 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:10.772 04:05:23 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:10.772 04:05:23 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:10.772 04:05:23 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:02:10.772 04:05:23 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:02:10.772 04:05:23 -- common/autotest_common.sh@10 -- $ set +x 00:02:10.772 ************************************ 00:02:10.772 START TEST ubsan 00:02:10.772 ************************************ 00:02:10.772 using ubsan 00:02:10.772 04:05:23 -- common/autotest_common.sh@1114 -- $ echo 'using ubsan' 00:02:10.772 00:02:10.772 real 0m0.000s 00:02:10.772 user 0m0.000s 00:02:10.772 sys 0m0.000s 00:02:10.772 04:05:23 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:02:10.772 ************************************ 00:02:10.772 END TEST ubsan 00:02:10.772 ************************************ 00:02:10.772 04:05:23 -- common/autotest_common.sh@10 -- $ set +x 00:02:10.772 04:05:23 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']' 00:02:10.772 04:05:23 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:10.772 04:05:23 -- common/autobuild_common.sh@432 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:10.772 04:05:23 -- common/autotest_common.sh@1087 -- $ '[' 2 -le 1 ']' 00:02:10.772 04:05:23 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:02:10.772 04:05:23 -- 
common/autotest_common.sh@10 -- $ set +x 00:02:10.772 ************************************ 00:02:10.772 START TEST build_native_dpdk 00:02:10.772 ************************************ 00:02:10.772 04:05:23 -- common/autotest_common.sh@1114 -- $ _build_native_dpdk 00:02:10.772 04:05:23 -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:10.772 04:05:23 -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:10.772 04:05:23 -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:10.772 04:05:23 -- common/autobuild_common.sh@51 -- $ local compiler 00:02:10.772 04:05:23 -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:10.772 04:05:23 -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:10.772 04:05:23 -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:10.772 04:05:23 -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:10.772 04:05:23 -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:10.772 04:05:23 -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:10.772 04:05:23 -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:10.772 04:05:23 -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:11.031 04:05:23 -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:11.031 04:05:23 -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:11.031 04:05:23 -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:02:11.031 04:05:23 -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:11.031 04:05:23 -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:02:11.031 04:05:23 -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]] 00:02:11.031 04:05:23 -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:11.031 04:05:23 -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:11.031 eeb0605f11 version: 23.11.0 00:02:11.031 238778122a doc: update release notes for 23.11 00:02:11.031 46aa6b3cfc doc: fix description of RSS features 00:02:11.031 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:02:11.031 7e421ae345 devtools: support skipping forbid rule check 00:02:11.031 04:05:23 -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:11.031 04:05:23 -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:11.031 04:05:23 -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:02:11.031 04:05:23 -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:11.031 04:05:23 -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:11.031 04:05:23 -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:11.031 04:05:23 -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:11.031 04:05:23 -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:11.031 04:05:23 -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:11.031 04:05:23 -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:02:11.031 04:05:23 -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:02:11.031 04:05:23 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:11.031 04:05:23 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:11.031 04:05:23 -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:02:11.031 04:05:23 -- common/autobuild_common.sh@167 -- $ cd 
/home/vagrant/spdk_repo/dpdk 00:02:11.031 04:05:23 -- common/autobuild_common.sh@168 -- $ uname -s 00:02:11.031 04:05:23 -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:02:11.031 04:05:23 -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0 00:02:11.031 04:05:23 -- scripts/common.sh@372 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:02:11.031 04:05:23 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:02:11.031 04:05:23 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:02:11.031 04:05:23 -- scripts/common.sh@335 -- $ IFS=.-: 00:02:11.031 04:05:23 -- scripts/common.sh@335 -- $ read -ra ver1 00:02:11.031 04:05:23 -- scripts/common.sh@336 -- $ IFS=.-: 00:02:11.031 04:05:23 -- scripts/common.sh@336 -- $ read -ra ver2 00:02:11.031 04:05:23 -- scripts/common.sh@337 -- $ local 'op=<' 00:02:11.031 04:05:23 -- scripts/common.sh@339 -- $ ver1_l=3 00:02:11.031 04:05:23 -- scripts/common.sh@340 -- $ ver2_l=3 00:02:11.031 04:05:23 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:02:11.031 04:05:23 -- scripts/common.sh@343 -- $ case "$op" in 00:02:11.031 04:05:23 -- scripts/common.sh@344 -- $ : 1 00:02:11.031 04:05:23 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:02:11.031 04:05:23 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:11.031 04:05:23 -- scripts/common.sh@364 -- $ decimal 23 00:02:11.031 04:05:23 -- scripts/common.sh@352 -- $ local d=23 00:02:11.031 04:05:23 -- scripts/common.sh@353 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:11.031 04:05:23 -- scripts/common.sh@354 -- $ echo 23 00:02:11.031 04:05:23 -- scripts/common.sh@364 -- $ ver1[v]=23 00:02:11.031 04:05:23 -- scripts/common.sh@365 -- $ decimal 21 00:02:11.031 04:05:23 -- scripts/common.sh@352 -- $ local d=21 00:02:11.031 04:05:23 -- scripts/common.sh@353 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:11.031 04:05:23 -- scripts/common.sh@354 -- $ echo 21 00:02:11.031 04:05:23 -- scripts/common.sh@365 -- $ ver2[v]=21 00:02:11.031 04:05:23 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:02:11.031 04:05:23 -- scripts/common.sh@366 -- $ return 1 00:02:11.031 04:05:23 -- common/autobuild_common.sh@173 -- $ patch -p1 00:02:11.031 patching file config/rte_config.h 00:02:11.031 Hunk #1 succeeded at 60 (offset 1 line). 00:02:11.031 04:05:23 -- common/autobuild_common.sh@176 -- $ lt 23.11.0 24.07.0 00:02:11.031 04:05:23 -- scripts/common.sh@372 -- $ cmp_versions 23.11.0 '<' 24.07.0 00:02:11.031 04:05:23 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:02:11.031 04:05:23 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:02:11.031 04:05:23 -- scripts/common.sh@335 -- $ IFS=.-: 00:02:11.031 04:05:23 -- scripts/common.sh@335 -- $ read -ra ver1 00:02:11.031 04:05:23 -- scripts/common.sh@336 -- $ IFS=.-: 00:02:11.031 04:05:23 -- scripts/common.sh@336 -- $ read -ra ver2 00:02:11.031 04:05:23 -- scripts/common.sh@337 -- $ local 'op=<' 00:02:11.031 04:05:23 -- scripts/common.sh@339 -- $ ver1_l=3 00:02:11.031 04:05:23 -- scripts/common.sh@340 -- $ ver2_l=3 00:02:11.031 04:05:23 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:02:11.031 04:05:23 -- scripts/common.sh@343 -- $ case "$op" in 00:02:11.031 04:05:23 -- scripts/common.sh@344 -- $ : 1 00:02:11.031 04:05:23 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:02:11.031 04:05:23 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:11.031 04:05:23 -- scripts/common.sh@364 -- $ decimal 23 00:02:11.031 04:05:23 -- scripts/common.sh@352 -- $ local d=23 00:02:11.031 04:05:23 -- scripts/common.sh@353 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:11.031 04:05:23 -- scripts/common.sh@354 -- $ echo 23 00:02:11.031 04:05:23 -- scripts/common.sh@364 -- $ ver1[v]=23 00:02:11.031 04:05:23 -- scripts/common.sh@365 -- $ decimal 24 00:02:11.031 04:05:23 -- scripts/common.sh@352 -- $ local d=24 00:02:11.031 04:05:23 -- scripts/common.sh@353 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:11.031 04:05:23 -- scripts/common.sh@354 -- $ echo 24 00:02:11.031 04:05:23 -- scripts/common.sh@365 -- $ ver2[v]=24 00:02:11.031 04:05:23 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:02:11.031 04:05:23 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] )) 00:02:11.031 04:05:23 -- scripts/common.sh@367 -- $ return 0 00:02:11.031 04:05:23 -- common/autobuild_common.sh@177 -- $ patch -p1 00:02:11.031 patching file lib/pcapng/rte_pcapng.c 00:02:11.031 04:05:23 -- common/autobuild_common.sh@180 -- $ dpdk_kmods=false 00:02:11.031 04:05:23 -- common/autobuild_common.sh@181 -- $ uname -s 00:02:11.031 04:05:23 -- common/autobuild_common.sh@181 -- $ '[' Linux = FreeBSD ']' 00:02:11.031 04:05:23 -- common/autobuild_common.sh@185 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:02:11.031 04:05:23 -- common/autobuild_common.sh@185 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:16.298 The Meson build system 00:02:16.298 Version: 1.5.0 00:02:16.298 Source dir: /home/vagrant/spdk_repo/dpdk 00:02:16.298 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:02:16.298 Build type: native build 00:02:16.298 Program cat found: YES (/usr/bin/cat) 00:02:16.298 Project name: DPDK 00:02:16.298 Project version: 23.11.0 00:02:16.298 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:16.298 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:16.298 Host machine cpu family: x86_64 00:02:16.298 Host machine cpu: x86_64 00:02:16.298 Message: ## Building in Developer Mode ## 00:02:16.298 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:16.298 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:02:16.298 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:02:16.298 Program python3 found: YES (/usr/bin/python3) 00:02:16.298 Program cat found: YES (/usr/bin/cat) 00:02:16.298 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:02:16.298 Compiler for C supports arguments -march=native: YES 00:02:16.298 Checking for size of "void *" : 8 00:02:16.298 Checking for size of "void *" : 8 (cached) 00:02:16.298 Library m found: YES 00:02:16.298 Library numa found: YES 00:02:16.298 Has header "numaif.h" : YES 00:02:16.298 Library fdt found: NO 00:02:16.298 Library execinfo found: NO 00:02:16.298 Has header "execinfo.h" : YES 00:02:16.298 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:16.298 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:16.298 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:16.298 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:16.298 Run-time dependency openssl found: YES 3.1.1 00:02:16.298 Run-time dependency libpcap found: YES 1.10.4 00:02:16.298 Has header "pcap.h" with dependency libpcap: YES 00:02:16.298 Compiler for C supports arguments -Wcast-qual: YES 00:02:16.298 Compiler for C supports arguments -Wdeprecated: YES 00:02:16.298 Compiler for C supports arguments -Wformat: YES 00:02:16.298 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:16.298 Compiler for C supports arguments -Wformat-security: NO 00:02:16.298 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:16.298 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:16.298 Compiler for C supports arguments -Wnested-externs: YES 00:02:16.298 Compiler for C supports arguments -Wold-style-definition: YES 00:02:16.298 Compiler for C supports arguments -Wpointer-arith: YES 00:02:16.298 Compiler for C supports arguments -Wsign-compare: YES 00:02:16.298 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:16.298 Compiler for C supports arguments -Wundef: YES 00:02:16.298 Compiler for C supports arguments -Wwrite-strings: YES 00:02:16.298 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:16.298 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:16.298 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:16.298 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:16.298 Program objdump found: YES (/usr/bin/objdump) 00:02:16.298 Compiler for C supports arguments -mavx512f: YES 00:02:16.298 Checking if "AVX512 checking" compiles: YES 00:02:16.298 Fetching value of define "__SSE4_2__" : 1 00:02:16.298 Fetching value of define "__AES__" : 1 00:02:16.298 Fetching value of define "__AVX__" : 1 00:02:16.298 Fetching value of define "__AVX2__" : 1 00:02:16.298 Fetching value of define "__AVX512BW__" : (undefined) 00:02:16.298 Fetching value of define "__AVX512CD__" : (undefined) 00:02:16.298 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:16.298 Fetching value of define "__AVX512F__" : (undefined) 00:02:16.298 Fetching value of define "__AVX512VL__" : (undefined) 00:02:16.298 Fetching value of define "__PCLMUL__" : 1 00:02:16.298 Fetching value of define "__RDRND__" : 1 00:02:16.298 Fetching value of define "__RDSEED__" : 1 00:02:16.298 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:16.298 Fetching value of define "__znver1__" : (undefined) 00:02:16.298 Fetching value of define "__znver2__" : (undefined) 00:02:16.298 Fetching value of define "__znver3__" : (undefined) 00:02:16.298 Fetching value of define "__znver4__" : (undefined) 00:02:16.298 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:16.298 Message: lib/log: Defining dependency "log" 00:02:16.298 Message: lib/kvargs: Defining dependency "kvargs" 00:02:16.298 
Message: lib/telemetry: Defining dependency "telemetry" 00:02:16.298 Checking for function "getentropy" : NO 00:02:16.298 Message: lib/eal: Defining dependency "eal" 00:02:16.298 Message: lib/ring: Defining dependency "ring" 00:02:16.298 Message: lib/rcu: Defining dependency "rcu" 00:02:16.298 Message: lib/mempool: Defining dependency "mempool" 00:02:16.298 Message: lib/mbuf: Defining dependency "mbuf" 00:02:16.298 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:16.298 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:16.298 Compiler for C supports arguments -mpclmul: YES 00:02:16.298 Compiler for C supports arguments -maes: YES 00:02:16.298 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:16.298 Compiler for C supports arguments -mavx512bw: YES 00:02:16.298 Compiler for C supports arguments -mavx512dq: YES 00:02:16.298 Compiler for C supports arguments -mavx512vl: YES 00:02:16.298 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:16.298 Compiler for C supports arguments -mavx2: YES 00:02:16.298 Compiler for C supports arguments -mavx: YES 00:02:16.298 Message: lib/net: Defining dependency "net" 00:02:16.298 Message: lib/meter: Defining dependency "meter" 00:02:16.298 Message: lib/ethdev: Defining dependency "ethdev" 00:02:16.298 Message: lib/pci: Defining dependency "pci" 00:02:16.298 Message: lib/cmdline: Defining dependency "cmdline" 00:02:16.298 Message: lib/metrics: Defining dependency "metrics" 00:02:16.298 Message: lib/hash: Defining dependency "hash" 00:02:16.298 Message: lib/timer: Defining dependency "timer" 00:02:16.298 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:16.298 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:02:16.298 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:02:16.298 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:02:16.298 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:02:16.298 Message: lib/acl: Defining dependency "acl" 00:02:16.298 Message: lib/bbdev: Defining dependency "bbdev" 00:02:16.298 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:16.298 Run-time dependency libelf found: YES 0.191 00:02:16.298 Message: lib/bpf: Defining dependency "bpf" 00:02:16.298 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:16.298 Message: lib/compressdev: Defining dependency "compressdev" 00:02:16.298 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:16.298 Message: lib/distributor: Defining dependency "distributor" 00:02:16.298 Message: lib/dmadev: Defining dependency "dmadev" 00:02:16.298 Message: lib/efd: Defining dependency "efd" 00:02:16.298 Message: lib/eventdev: Defining dependency "eventdev" 00:02:16.298 Message: lib/dispatcher: Defining dependency "dispatcher" 00:02:16.298 Message: lib/gpudev: Defining dependency "gpudev" 00:02:16.298 Message: lib/gro: Defining dependency "gro" 00:02:16.298 Message: lib/gso: Defining dependency "gso" 00:02:16.298 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:16.298 Message: lib/jobstats: Defining dependency "jobstats" 00:02:16.298 Message: lib/latencystats: Defining dependency "latencystats" 00:02:16.298 Message: lib/lpm: Defining dependency "lpm" 00:02:16.298 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:16.298 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:16.298 Fetching value of define "__AVX512IFMA__" : (undefined) 00:02:16.298 Compiler for C supports arguments -mavx512f 
-mavx512dq -mavx512ifma: YES 00:02:16.298 Message: lib/member: Defining dependency "member" 00:02:16.298 Message: lib/pcapng: Defining dependency "pcapng" 00:02:16.299 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:16.299 Message: lib/power: Defining dependency "power" 00:02:16.299 Message: lib/rawdev: Defining dependency "rawdev" 00:02:16.299 Message: lib/regexdev: Defining dependency "regexdev" 00:02:16.299 Message: lib/mldev: Defining dependency "mldev" 00:02:16.299 Message: lib/rib: Defining dependency "rib" 00:02:16.299 Message: lib/reorder: Defining dependency "reorder" 00:02:16.299 Message: lib/sched: Defining dependency "sched" 00:02:16.299 Message: lib/security: Defining dependency "security" 00:02:16.299 Message: lib/stack: Defining dependency "stack" 00:02:16.299 Has header "linux/userfaultfd.h" : YES 00:02:16.299 Has header "linux/vduse.h" : YES 00:02:16.299 Message: lib/vhost: Defining dependency "vhost" 00:02:16.299 Message: lib/ipsec: Defining dependency "ipsec" 00:02:16.299 Message: lib/pdcp: Defining dependency "pdcp" 00:02:16.299 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:16.299 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:16.299 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:02:16.299 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:16.299 Message: lib/fib: Defining dependency "fib" 00:02:16.299 Message: lib/port: Defining dependency "port" 00:02:16.299 Message: lib/pdump: Defining dependency "pdump" 00:02:16.299 Message: lib/table: Defining dependency "table" 00:02:16.299 Message: lib/pipeline: Defining dependency "pipeline" 00:02:16.299 Message: lib/graph: Defining dependency "graph" 00:02:16.299 Message: lib/node: Defining dependency "node" 00:02:16.299 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:18.201 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:18.201 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:18.201 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:18.201 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:18.201 Compiler for C supports arguments -Wno-unused-value: YES 00:02:18.201 Compiler for C supports arguments -Wno-format: YES 00:02:18.201 Compiler for C supports arguments -Wno-format-security: YES 00:02:18.201 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:18.201 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:18.201 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:18.201 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:18.201 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:18.201 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:18.201 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:18.201 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:18.201 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:18.201 Has header "sys/epoll.h" : YES 00:02:18.201 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:18.201 Configuring doxy-api-html.conf using configuration 00:02:18.201 Configuring doxy-api-man.conf using configuration 00:02:18.201 Program mandb found: YES (/usr/bin/mandb) 00:02:18.201 Program sphinx-build found: NO 00:02:18.201 Configuring rte_build_config.h using configuration 00:02:18.201 Message: 00:02:18.201 ================= 00:02:18.201 Applications Enabled 00:02:18.201 ================= 
00:02:18.201 00:02:18.201 apps: 00:02:18.201 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:02:18.201 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:02:18.201 test-pmd, test-regex, test-sad, test-security-perf, 00:02:18.201 00:02:18.201 Message: 00:02:18.201 ================= 00:02:18.201 Libraries Enabled 00:02:18.201 ================= 00:02:18.201 00:02:18.201 libs: 00:02:18.201 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:18.201 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:02:18.201 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:02:18.201 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:02:18.201 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:02:18.201 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:02:18.201 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:02:18.201 00:02:18.201 00:02:18.201 Message: 00:02:18.201 =============== 00:02:18.201 Drivers Enabled 00:02:18.201 =============== 00:02:18.201 00:02:18.201 common: 00:02:18.201 00:02:18.201 bus: 00:02:18.201 pci, vdev, 00:02:18.201 mempool: 00:02:18.201 ring, 00:02:18.201 dma: 00:02:18.201 00:02:18.201 net: 00:02:18.201 i40e, 00:02:18.201 raw: 00:02:18.201 00:02:18.201 crypto: 00:02:18.201 00:02:18.201 compress: 00:02:18.201 00:02:18.201 regex: 00:02:18.201 00:02:18.201 ml: 00:02:18.201 00:02:18.201 vdpa: 00:02:18.201 00:02:18.201 event: 00:02:18.201 00:02:18.201 baseband: 00:02:18.201 00:02:18.201 gpu: 00:02:18.201 00:02:18.201 00:02:18.201 Message: 00:02:18.201 ================= 00:02:18.201 Content Skipped 00:02:18.201 ================= 00:02:18.201 00:02:18.201 apps: 00:02:18.201 00:02:18.201 libs: 00:02:18.201 00:02:18.201 drivers: 00:02:18.201 common/cpt: not in enabled drivers build config 00:02:18.201 common/dpaax: not in enabled drivers build config 00:02:18.201 common/iavf: not in enabled drivers build config 00:02:18.201 common/idpf: not in enabled drivers build config 00:02:18.201 common/mvep: not in enabled drivers build config 00:02:18.201 common/octeontx: not in enabled drivers build config 00:02:18.201 bus/auxiliary: not in enabled drivers build config 00:02:18.201 bus/cdx: not in enabled drivers build config 00:02:18.201 bus/dpaa: not in enabled drivers build config 00:02:18.201 bus/fslmc: not in enabled drivers build config 00:02:18.201 bus/ifpga: not in enabled drivers build config 00:02:18.201 bus/platform: not in enabled drivers build config 00:02:18.201 bus/vmbus: not in enabled drivers build config 00:02:18.201 common/cnxk: not in enabled drivers build config 00:02:18.201 common/mlx5: not in enabled drivers build config 00:02:18.201 common/nfp: not in enabled drivers build config 00:02:18.201 common/qat: not in enabled drivers build config 00:02:18.201 common/sfc_efx: not in enabled drivers build config 00:02:18.201 mempool/bucket: not in enabled drivers build config 00:02:18.201 mempool/cnxk: not in enabled drivers build config 00:02:18.201 mempool/dpaa: not in enabled drivers build config 00:02:18.201 mempool/dpaa2: not in enabled drivers build config 00:02:18.201 mempool/octeontx: not in enabled drivers build config 00:02:18.201 mempool/stack: not in enabled drivers build config 00:02:18.201 dma/cnxk: not in enabled drivers build config 00:02:18.202 dma/dpaa: not in enabled drivers build config 00:02:18.202 dma/dpaa2: not in enabled drivers build config 00:02:18.202 
dma/hisilicon: not in enabled drivers build config 00:02:18.202 dma/idxd: not in enabled drivers build config 00:02:18.202 dma/ioat: not in enabled drivers build config 00:02:18.202 dma/skeleton: not in enabled drivers build config 00:02:18.202 net/af_packet: not in enabled drivers build config 00:02:18.202 net/af_xdp: not in enabled drivers build config 00:02:18.202 net/ark: not in enabled drivers build config 00:02:18.202 net/atlantic: not in enabled drivers build config 00:02:18.202 net/avp: not in enabled drivers build config 00:02:18.202 net/axgbe: not in enabled drivers build config 00:02:18.202 net/bnx2x: not in enabled drivers build config 00:02:18.202 net/bnxt: not in enabled drivers build config 00:02:18.202 net/bonding: not in enabled drivers build config 00:02:18.202 net/cnxk: not in enabled drivers build config 00:02:18.202 net/cpfl: not in enabled drivers build config 00:02:18.202 net/cxgbe: not in enabled drivers build config 00:02:18.202 net/dpaa: not in enabled drivers build config 00:02:18.202 net/dpaa2: not in enabled drivers build config 00:02:18.202 net/e1000: not in enabled drivers build config 00:02:18.202 net/ena: not in enabled drivers build config 00:02:18.202 net/enetc: not in enabled drivers build config 00:02:18.202 net/enetfec: not in enabled drivers build config 00:02:18.202 net/enic: not in enabled drivers build config 00:02:18.202 net/failsafe: not in enabled drivers build config 00:02:18.202 net/fm10k: not in enabled drivers build config 00:02:18.202 net/gve: not in enabled drivers build config 00:02:18.202 net/hinic: not in enabled drivers build config 00:02:18.202 net/hns3: not in enabled drivers build config 00:02:18.202 net/iavf: not in enabled drivers build config 00:02:18.202 net/ice: not in enabled drivers build config 00:02:18.202 net/idpf: not in enabled drivers build config 00:02:18.202 net/igc: not in enabled drivers build config 00:02:18.202 net/ionic: not in enabled drivers build config 00:02:18.202 net/ipn3ke: not in enabled drivers build config 00:02:18.202 net/ixgbe: not in enabled drivers build config 00:02:18.202 net/mana: not in enabled drivers build config 00:02:18.202 net/memif: not in enabled drivers build config 00:02:18.202 net/mlx4: not in enabled drivers build config 00:02:18.202 net/mlx5: not in enabled drivers build config 00:02:18.202 net/mvneta: not in enabled drivers build config 00:02:18.202 net/mvpp2: not in enabled drivers build config 00:02:18.202 net/netvsc: not in enabled drivers build config 00:02:18.202 net/nfb: not in enabled drivers build config 00:02:18.202 net/nfp: not in enabled drivers build config 00:02:18.202 net/ngbe: not in enabled drivers build config 00:02:18.202 net/null: not in enabled drivers build config 00:02:18.202 net/octeontx: not in enabled drivers build config 00:02:18.202 net/octeon_ep: not in enabled drivers build config 00:02:18.202 net/pcap: not in enabled drivers build config 00:02:18.202 net/pfe: not in enabled drivers build config 00:02:18.202 net/qede: not in enabled drivers build config 00:02:18.202 net/ring: not in enabled drivers build config 00:02:18.202 net/sfc: not in enabled drivers build config 00:02:18.202 net/softnic: not in enabled drivers build config 00:02:18.202 net/tap: not in enabled drivers build config 00:02:18.202 net/thunderx: not in enabled drivers build config 00:02:18.202 net/txgbe: not in enabled drivers build config 00:02:18.202 net/vdev_netvsc: not in enabled drivers build config 00:02:18.202 net/vhost: not in enabled drivers build config 00:02:18.202 net/virtio: 
not in enabled drivers build config 00:02:18.202 net/vmxnet3: not in enabled drivers build config 00:02:18.202 raw/cnxk_bphy: not in enabled drivers build config 00:02:18.202 raw/cnxk_gpio: not in enabled drivers build config 00:02:18.202 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:18.202 raw/ifpga: not in enabled drivers build config 00:02:18.202 raw/ntb: not in enabled drivers build config 00:02:18.202 raw/skeleton: not in enabled drivers build config 00:02:18.202 crypto/armv8: not in enabled drivers build config 00:02:18.202 crypto/bcmfs: not in enabled drivers build config 00:02:18.202 crypto/caam_jr: not in enabled drivers build config 00:02:18.202 crypto/ccp: not in enabled drivers build config 00:02:18.202 crypto/cnxk: not in enabled drivers build config 00:02:18.202 crypto/dpaa_sec: not in enabled drivers build config 00:02:18.202 crypto/dpaa2_sec: not in enabled drivers build config 00:02:18.202 crypto/ipsec_mb: not in enabled drivers build config 00:02:18.202 crypto/mlx5: not in enabled drivers build config 00:02:18.202 crypto/mvsam: not in enabled drivers build config 00:02:18.202 crypto/nitrox: not in enabled drivers build config 00:02:18.202 crypto/null: not in enabled drivers build config 00:02:18.202 crypto/octeontx: not in enabled drivers build config 00:02:18.202 crypto/openssl: not in enabled drivers build config 00:02:18.202 crypto/scheduler: not in enabled drivers build config 00:02:18.202 crypto/uadk: not in enabled drivers build config 00:02:18.202 crypto/virtio: not in enabled drivers build config 00:02:18.202 compress/isal: not in enabled drivers build config 00:02:18.202 compress/mlx5: not in enabled drivers build config 00:02:18.202 compress/octeontx: not in enabled drivers build config 00:02:18.202 compress/zlib: not in enabled drivers build config 00:02:18.202 regex/mlx5: not in enabled drivers build config 00:02:18.202 regex/cn9k: not in enabled drivers build config 00:02:18.202 ml/cnxk: not in enabled drivers build config 00:02:18.202 vdpa/ifc: not in enabled drivers build config 00:02:18.202 vdpa/mlx5: not in enabled drivers build config 00:02:18.202 vdpa/nfp: not in enabled drivers build config 00:02:18.202 vdpa/sfc: not in enabled drivers build config 00:02:18.202 event/cnxk: not in enabled drivers build config 00:02:18.202 event/dlb2: not in enabled drivers build config 00:02:18.202 event/dpaa: not in enabled drivers build config 00:02:18.202 event/dpaa2: not in enabled drivers build config 00:02:18.202 event/dsw: not in enabled drivers build config 00:02:18.202 event/opdl: not in enabled drivers build config 00:02:18.202 event/skeleton: not in enabled drivers build config 00:02:18.202 event/sw: not in enabled drivers build config 00:02:18.202 event/octeontx: not in enabled drivers build config 00:02:18.202 baseband/acc: not in enabled drivers build config 00:02:18.202 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:18.202 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:18.202 baseband/la12xx: not in enabled drivers build config 00:02:18.202 baseband/null: not in enabled drivers build config 00:02:18.202 baseband/turbo_sw: not in enabled drivers build config 00:02:18.202 gpu/cuda: not in enabled drivers build config 00:02:18.202 00:02:18.202 00:02:18.202 Build targets in project: 220 00:02:18.202 00:02:18.202 DPDK 23.11.0 00:02:18.202 00:02:18.202 User defined options 00:02:18.202 libdir : lib 00:02:18.202 prefix : /home/vagrant/spdk_repo/dpdk/build 00:02:18.202 c_args : -fPIC -g -fcommon -Werror 
-Wno-stringop-overflow 00:02:18.202 c_link_args : 00:02:18.202 enable_docs : false 00:02:18.202 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:18.202 enable_kmods : false 00:02:18.202 machine : native 00:02:18.202 tests : false 00:02:18.202 00:02:18.202 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:18.202 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:02:18.202 04:05:30 -- common/autobuild_common.sh@189 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:02:18.460 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:18.460 [1/710] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:18.460 [2/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:18.460 [3/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:18.460 [4/710] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:18.460 [5/710] Linking static target lib/librte_kvargs.a 00:02:18.460 [6/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:18.460 [7/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:18.718 [8/710] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:18.718 [9/710] Linking static target lib/librte_log.a 00:02:18.718 [10/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:18.718 [11/710] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.977 [12/710] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.977 [13/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:18.977 [14/710] Linking target lib/librte_log.so.24.0 00:02:18.977 [15/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:19.235 [16/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:19.235 [17/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:19.235 [18/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:19.235 [19/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:19.235 [20/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:19.493 [21/710] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:19.493 [22/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:19.493 [23/710] Linking target lib/librte_kvargs.so.24.0 00:02:19.493 [24/710] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:19.493 [25/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:19.752 [26/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:19.752 [27/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:19.752 [28/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:19.752 [29/710] Linking static target lib/librte_telemetry.a 00:02:19.752 [30/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:20.009 [31/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:20.009 [32/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:20.009 [33/710] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:20.009 [34/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:20.266 [35/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:20.266 [36/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:20.266 [37/710] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.267 [38/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:20.267 [39/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:20.267 [40/710] Linking target lib/librte_telemetry.so.24.0 00:02:20.267 [41/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:20.267 [42/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:20.267 [43/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:20.525 [44/710] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:20.525 [45/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:20.525 [46/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:20.784 [47/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:20.784 [48/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:21.043 [49/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:21.043 [50/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:21.043 [51/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:21.043 [52/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:21.043 [53/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:21.043 [54/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:21.301 [55/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:21.301 [56/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:21.301 [57/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:21.301 [58/710] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:21.301 [59/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:21.559 [60/710] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:21.559 [61/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:21.559 [62/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:21.559 [63/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:21.559 [64/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:21.559 [65/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:21.816 [66/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:21.816 [67/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:21.816 [68/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:22.074 [69/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:22.074 [70/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:22.074 [71/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:22.074 [72/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 
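For reference, the "User defined options" summary printed above corresponds roughly to the following meson invocation, run from /home/vagrant/spdk_repo/dpdk. This is a reconstruction from that summary only; the exact command line used by autobuild_common.sh is not shown in this log, and the deprecation warning above indicates the script still invokes `meson [options]` rather than `meson setup [options]`.

  meson setup build-tmp \
      --prefix=/home/vagrant/spdk_repo/dpdk/build \
      --libdir=lib \
      -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
      -Denable_docs=false \
      -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base \
      -Denable_kmods=false \
      -Dmachine=native \
      -Dtests=false
  ninja -C build-tmp -j10

With only the bus, mempool/ring and net/i40e driver classes enabled, the 220 build targets reported by meson reduce to the core libraries, the dpdk-* test apps and the i40e PMD, which is why every other driver above is listed as "not in enabled drivers build config".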
00:02:22.074 [73/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:22.074 [74/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:22.074 [75/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:22.074 [76/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:22.074 [77/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:22.332 [78/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:22.332 [79/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:22.590 [80/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:22.590 [81/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:22.590 [82/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:22.848 [83/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:22.848 [84/710] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:22.848 [85/710] Linking static target lib/librte_ring.a 00:02:22.848 [86/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:23.107 [87/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:23.107 [88/710] Linking static target lib/librte_eal.a 00:02:23.107 [89/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:23.107 [90/710] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.368 [91/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:23.368 [92/710] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:23.368 [93/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:23.368 [94/710] Linking static target lib/librte_mempool.a 00:02:23.368 [95/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:23.626 [96/710] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:23.626 [97/710] Linking static target lib/librte_rcu.a 00:02:23.626 [98/710] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:23.626 [99/710] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:23.883 [100/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:23.883 [101/710] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.883 [102/710] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:23.883 [103/710] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:23.883 [104/710] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.141 [105/710] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:24.141 [106/710] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:24.141 [107/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:24.141 [108/710] Linking static target lib/librte_mbuf.a 00:02:24.398 [109/710] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:24.398 [110/710] Linking static target lib/librte_net.a 00:02:24.655 [111/710] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:24.655 [112/710] Linking static target lib/librte_meter.a 00:02:24.655 [113/710] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.655 [114/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:24.655 [115/710] Compiling 
C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:24.655 [116/710] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.655 [117/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:24.655 [118/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:24.914 [119/710] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.481 [120/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:25.481 [121/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:25.740 [122/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:25.740 [123/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:25.740 [124/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:25.740 [125/710] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:25.740 [126/710] Linking static target lib/librte_pci.a 00:02:25.999 [127/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:25.999 [128/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:25.999 [129/710] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.999 [130/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:25.999 [131/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:26.258 [132/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:26.258 [133/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:26.258 [134/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:26.258 [135/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:26.258 [136/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:26.258 [137/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:26.258 [138/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:26.258 [139/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:26.258 [140/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:26.517 [141/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:26.517 [142/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:26.776 [143/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:26.776 [144/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:26.776 [145/710] Linking static target lib/librte_cmdline.a 00:02:27.034 [146/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:27.034 [147/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:27.034 [148/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:27.034 [149/710] Linking static target lib/librte_metrics.a 00:02:27.034 [150/710] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:27.293 [151/710] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.552 [152/710] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.811 [153/710] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:27.811 [154/710] Compiling C object 
lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:27.811 [155/710] Linking static target lib/librte_timer.a 00:02:28.073 [156/710] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.073 [157/710] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:28.331 [158/710] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:28.591 [159/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:28.883 [160/710] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:28.883 [161/710] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:29.142 [162/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:29.142 [163/710] Linking static target lib/librte_ethdev.a 00:02:29.142 [164/710] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:29.142 [165/710] Linking static target lib/librte_bitratestats.a 00:02:29.142 [166/710] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.401 [167/710] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:29.401 [168/710] Linking static target lib/librte_hash.a 00:02:29.401 [169/710] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.401 [170/710] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:29.401 [171/710] Linking target lib/librte_eal.so.24.0 00:02:29.401 [172/710] Linking static target lib/librte_bbdev.a 00:02:29.401 [173/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:29.401 [174/710] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:29.663 [175/710] Linking target lib/librte_ring.so.24.0 00:02:29.663 [176/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:29.663 [177/710] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:29.663 [178/710] Linking target lib/librte_meter.so.24.0 00:02:29.663 [179/710] Linking target lib/librte_rcu.so.24.0 00:02:29.663 [180/710] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:02:29.663 [181/710] Linking target lib/librte_mempool.so.24.0 00:02:29.922 [182/710] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:29.922 [183/710] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:29.922 [184/710] Linking target lib/librte_pci.so.24.0 00:02:29.922 [185/710] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:29.922 [186/710] Linking target lib/librte_timer.so.24.0 00:02:29.922 [187/710] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.922 [188/710] Linking static target lib/acl/libavx2_tmp.a 00:02:29.922 [189/710] Linking target lib/librte_mbuf.so.24.0 00:02:29.922 [190/710] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.922 [191/710] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:29.922 [192/710] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:30.182 [193/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:30.182 [194/710] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:02:30.182 [195/710] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:30.182 [196/710] Linking static target lib/acl/libavx512_tmp.a 00:02:30.182 [197/710] Linking target 
lib/librte_bbdev.so.24.0 00:02:30.182 [198/710] Linking target lib/librte_net.so.24.0 00:02:30.182 [199/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:30.182 [200/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:30.182 [201/710] Linking static target lib/librte_acl.a 00:02:30.182 [202/710] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:30.441 [203/710] Linking target lib/librte_cmdline.so.24.0 00:02:30.441 [204/710] Linking target lib/librte_hash.so.24.0 00:02:30.441 [205/710] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:30.441 [206/710] Linking static target lib/librte_cfgfile.a 00:02:30.441 [207/710] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:30.700 [208/710] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.700 [209/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:30.700 [210/710] Linking target lib/librte_acl.so.24.0 00:02:30.700 [211/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:30.700 [212/710] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:02:30.959 [213/710] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.959 [214/710] Linking target lib/librte_cfgfile.so.24.0 00:02:30.959 [215/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:30.959 [216/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:31.218 [217/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:31.218 [218/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:31.477 [219/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:31.477 [220/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:31.477 [221/710] Linking static target lib/librte_bpf.a 00:02:31.477 [222/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:31.477 [223/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:31.477 [224/710] Linking static target lib/librte_compressdev.a 00:02:31.736 [225/710] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.736 [226/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:31.736 [227/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:31.994 [228/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:31.994 [229/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:31.994 [230/710] Linking static target lib/librte_distributor.a 00:02:31.994 [231/710] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.994 [232/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:31.994 [233/710] Linking target lib/librte_compressdev.so.24.0 00:02:32.253 [234/710] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.253 [235/710] Linking target lib/librte_distributor.so.24.0 00:02:32.253 [236/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:32.253 [237/710] Linking static target lib/librte_dmadev.a 00:02:32.511 [238/710] Compiling C object 
lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:32.769 [239/710] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.769 [240/710] Linking target lib/librte_dmadev.so.24.0 00:02:32.769 [241/710] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:32.769 [242/710] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:33.027 [243/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:33.286 [244/710] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:33.286 [245/710] Linking static target lib/librte_efd.a 00:02:33.286 [246/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:02:33.286 [247/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:33.545 [248/710] Linking static target lib/librte_cryptodev.a 00:02:33.545 [249/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:33.545 [250/710] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.545 [251/710] Linking target lib/librte_efd.so.24.0 00:02:33.803 [252/710] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:02:33.803 [253/710] Linking static target lib/librte_dispatcher.a 00:02:34.060 [254/710] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.060 [255/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:34.060 [256/710] Linking target lib/librte_ethdev.so.24.0 00:02:34.319 [257/710] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:34.319 [258/710] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:34.319 [259/710] Linking target lib/librte_metrics.so.24.0 00:02:34.319 [260/710] Linking target lib/librte_bpf.so.24.0 00:02:34.319 [261/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:34.319 [262/710] Linking static target lib/librte_gpudev.a 00:02:34.319 [263/710] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.319 [264/710] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:02:34.319 [265/710] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:02:34.319 [266/710] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:34.319 [267/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:34.319 [268/710] Linking target lib/librte_bitratestats.so.24.0 00:02:34.943 [269/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:34.943 [270/710] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.943 [271/710] Linking target lib/librte_cryptodev.so.24.0 00:02:34.943 [272/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:02:34.943 [273/710] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:35.217 [274/710] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:35.217 [275/710] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.217 [276/710] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:35.217 [277/710] Linking target lib/librte_gpudev.so.24.0 00:02:35.217 [278/710] Compiling C object 
lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:35.217 [279/710] Linking static target lib/librte_eventdev.a 00:02:35.217 [280/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:35.217 [281/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:35.217 [282/710] Linking static target lib/librte_gro.a 00:02:35.217 [283/710] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:35.476 [284/710] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:35.476 [285/710] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.476 [286/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:35.476 [287/710] Linking target lib/librte_gro.so.24.0 00:02:35.476 [288/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:35.734 [289/710] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:35.734 [290/710] Linking static target lib/librte_gso.a 00:02:35.993 [291/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:35.993 [292/710] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.993 [293/710] Linking target lib/librte_gso.so.24.0 00:02:35.993 [294/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:35.993 [295/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:35.993 [296/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:36.251 [297/710] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:36.251 [298/710] Linking static target lib/librte_jobstats.a 00:02:36.251 [299/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:36.251 [300/710] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:36.251 [301/710] Linking static target lib/librte_latencystats.a 00:02:36.251 [302/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:36.251 [303/710] Linking static target lib/librte_ip_frag.a 00:02:36.510 [304/710] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.510 [305/710] Linking target lib/librte_jobstats.so.24.0 00:02:36.510 [306/710] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.510 [307/710] Linking target lib/librte_latencystats.so.24.0 00:02:36.510 [308/710] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.769 [309/710] Linking target lib/librte_ip_frag.so.24.0 00:02:36.769 [310/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:36.769 [311/710] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:36.769 [312/710] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:36.769 [313/710] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:02:36.769 [314/710] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:36.769 [315/710] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:37.027 [316/710] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:37.027 [317/710] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:37.027 [318/710] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.286 [319/710] Linking 
target lib/librte_eventdev.so.24.0 00:02:37.286 [320/710] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:02:37.286 [321/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:37.286 [322/710] Linking target lib/librte_dispatcher.so.24.0 00:02:37.286 [323/710] Linking static target lib/librte_lpm.a 00:02:37.286 [324/710] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:37.544 [325/710] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:37.544 [326/710] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:37.544 [327/710] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:37.544 [328/710] Linking static target lib/librte_pcapng.a 00:02:37.544 [329/710] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:37.802 [330/710] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.802 [331/710] Linking target lib/librte_lpm.so.24.0 00:02:37.802 [332/710] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:37.802 [333/710] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:37.802 [334/710] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.802 [335/710] Linking target lib/librte_pcapng.so.24.0 00:02:38.060 [336/710] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:02:38.060 [337/710] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:02:38.060 [338/710] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:38.060 [339/710] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:38.319 [340/710] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:38.319 [341/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:02:38.319 [342/710] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:38.319 [343/710] Linking static target lib/librte_power.a 00:02:38.577 [344/710] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:38.577 [345/710] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:38.577 [346/710] Linking static target lib/librte_rawdev.a 00:02:38.577 [347/710] Linking static target lib/librte_regexdev.a 00:02:38.577 [348/710] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:38.577 [349/710] Linking static target lib/librte_member.a 00:02:38.577 [350/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:02:38.837 [351/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:02:38.837 [352/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:02:38.837 [353/710] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.837 [354/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:02:38.837 [355/710] Linking static target lib/librte_mldev.a 00:02:38.837 [356/710] Linking target lib/librte_member.so.24.0 00:02:39.095 [357/710] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.095 [358/710] Linking target lib/librte_rawdev.so.24.0 00:02:39.095 [359/710] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.095 [360/710] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 
00:02:39.095 [361/710] Linking target lib/librte_power.so.24.0 00:02:39.095 [362/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:39.353 [363/710] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.353 [364/710] Linking target lib/librte_regexdev.so.24.0 00:02:39.353 [365/710] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:39.613 [366/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:39.613 [367/710] Linking static target lib/librte_rib.a 00:02:39.613 [368/710] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:39.613 [369/710] Linking static target lib/librte_reorder.a 00:02:39.613 [370/710] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:39.613 [371/710] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:39.613 [372/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:39.613 [373/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:39.872 [374/710] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.872 [375/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:39.872 [376/710] Linking static target lib/librte_stack.a 00:02:39.872 [377/710] Linking target lib/librte_reorder.so.24.0 00:02:39.872 [378/710] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:39.872 [379/710] Linking static target lib/librte_security.a 00:02:40.130 [380/710] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.130 [381/710] Linking target lib/librte_rib.so.24.0 00:02:40.130 [382/710] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:02:40.130 [383/710] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.130 [384/710] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.131 [385/710] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:02:40.131 [386/710] Linking target lib/librte_stack.so.24.0 00:02:40.131 [387/710] Linking target lib/librte_mldev.so.24.0 00:02:40.388 [388/710] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.388 [389/710] Linking target lib/librte_security.so.24.0 00:02:40.388 [390/710] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:40.388 [391/710] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:40.647 [392/710] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:02:40.647 [393/710] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:40.904 [394/710] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:40.904 [395/710] Linking static target lib/librte_sched.a 00:02:41.161 [396/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:41.161 [397/710] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.161 [398/710] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:41.418 [399/710] Linking target lib/librte_sched.so.24.0 00:02:41.418 [400/710] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:02:41.418 [401/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:41.676 [402/710] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:41.676 [403/710] Compiling C object 
lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:41.955 [404/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:42.223 [405/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:42.223 [406/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:02:42.223 [407/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:02:42.223 [408/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:42.223 [409/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:02:42.481 [410/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:42.481 [411/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:42.481 [412/710] Linking static target lib/librte_ipsec.a 00:02:42.739 [413/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:02:42.996 [414/710] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.996 [415/710] Linking target lib/librte_ipsec.so.24.0 00:02:42.996 [416/710] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:02:42.996 [417/710] Linking static target lib/fib/libtrie_avx512_tmp.a 00:02:42.996 [418/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:42.996 [419/710] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:02:42.996 [420/710] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:02:42.996 [421/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:42.996 [422/710] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:02:42.996 [423/710] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:02:43.951 [424/710] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:43.951 [425/710] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:43.951 [426/710] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:43.951 [427/710] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:43.951 [428/710] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:44.210 [429/710] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:44.210 [430/710] Linking static target lib/librte_fib.a 00:02:44.210 [431/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:02:44.210 [432/710] Linking static target lib/librte_pdcp.a 00:02:44.469 [433/710] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.469 [434/710] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.469 [435/710] Linking target lib/librte_fib.so.24.0 00:02:44.469 [436/710] Linking target lib/librte_pdcp.so.24.0 00:02:44.469 [437/710] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:45.035 [438/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:45.035 [439/710] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:45.035 [440/710] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:45.294 [441/710] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:45.294 [442/710] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:45.294 [443/710] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:45.294 [444/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:45.553 [445/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:45.812 
[446/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:45.812 [447/710] Linking static target lib/librte_port.a 00:02:45.812 [448/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:46.071 [449/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:46.071 [450/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:46.072 [451/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:46.072 [452/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:46.331 [453/710] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.331 [454/710] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:46.331 [455/710] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:46.331 [456/710] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:46.331 [457/710] Linking target lib/librte_port.so.24.0 00:02:46.331 [458/710] Linking static target lib/librte_pdump.a 00:02:46.331 [459/710] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:02:46.590 [460/710] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.590 [461/710] Linking target lib/librte_pdump.so.24.0 00:02:46.590 [462/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:46.850 [463/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:47.109 [464/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:47.109 [465/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:47.109 [466/710] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:47.109 [467/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:47.368 [468/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:47.368 [469/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:47.368 [470/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:47.368 [471/710] Linking static target lib/librte_table.a 00:02:47.627 [472/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:47.627 [473/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:48.196 [474/710] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.196 [475/710] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:48.196 [476/710] Linking target lib/librte_table.so.24.0 00:02:48.196 [477/710] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:02:48.455 [478/710] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:48.455 [479/710] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:48.714 [480/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:02:48.714 [481/710] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:48.973 [482/710] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:49.232 [483/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:49.232 [484/710] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:02:49.232 [485/710] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:49.232 [486/710] Compiling C object 
lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:02:49.799 [487/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:49.799 [488/710] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:49.799 [489/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:02:49.799 [490/710] Linking static target lib/librte_graph.a 00:02:49.799 [491/710] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:50.058 [492/710] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:50.058 [493/710] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:02:50.379 [494/710] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.379 [495/710] Linking target lib/librte_graph.so.24.0 00:02:50.379 [496/710] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:02:50.653 [497/710] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:02:50.653 [498/710] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:50.653 [499/710] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:50.913 [500/710] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:02:51.171 [501/710] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:51.171 [502/710] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:02:51.171 [503/710] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:51.171 [504/710] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:02:51.171 [505/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:51.429 [506/710] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:02:51.429 [507/710] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:51.429 [508/710] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:51.996 [509/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:51.996 [510/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:51.996 [511/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:51.996 [512/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:51.996 [513/710] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:02:51.996 [514/710] Linking static target lib/librte_node.a 00:02:51.996 [515/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:52.255 [516/710] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.255 [517/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:52.255 [518/710] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:52.255 [519/710] Linking target lib/librte_node.so.24.0 00:02:52.514 [520/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:52.514 [521/710] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:52.514 [522/710] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:52.514 [523/710] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:52.514 [524/710] Linking static target drivers/librte_bus_vdev.a 00:02:52.514 [525/710] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:52.773 [526/710] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:52.773 [527/710] Linking static target 
drivers/librte_bus_pci.a 00:02:53.032 [528/710] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.032 [529/710] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:53.032 [530/710] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:53.032 [531/710] Linking target drivers/librte_bus_vdev.so.24.0 00:02:53.032 [532/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:53.032 [533/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:53.032 [534/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:53.032 [535/710] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:02:53.290 [536/710] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.290 [537/710] Linking target drivers/librte_bus_pci.so.24.0 00:02:53.290 [538/710] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:53.290 [539/710] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:53.290 [540/710] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:02:53.548 [541/710] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:53.548 [542/710] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:53.548 [543/710] Linking static target drivers/librte_mempool_ring.a 00:02:53.548 [544/710] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:53.548 [545/710] Linking target drivers/librte_mempool_ring.so.24.0 00:02:53.548 [546/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:54.116 [547/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:54.375 [548/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:54.375 [549/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:54.375 [550/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:54.375 [551/710] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:55.313 [552/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:55.313 [553/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:55.313 [554/710] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:02:55.313 [555/710] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:02:55.313 [556/710] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:55.313 [557/710] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:55.882 [558/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:55.882 [559/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:56.142 [560/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:56.142 [561/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:02:56.142 [562/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:56.708 [563/710] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:02:56.708 [564/710] 
Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:02:56.967 [565/710] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:02:56.967 [566/710] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:02:57.226 [567/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:57.496 [568/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:57.496 [569/710] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:02:57.496 [570/710] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:02:57.496 [571/710] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:02:57.496 [572/710] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:02:57.777 [573/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:57.777 [574/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:57.777 [575/710] Linking static target lib/librte_vhost.a 00:02:57.777 [576/710] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:02:58.036 [577/710] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:02:58.036 [578/710] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:02:58.295 [579/710] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:02:58.295 [580/710] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:02:58.295 [581/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:58.295 [582/710] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:58.553 [583/710] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:58.812 [584/710] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:58.812 [585/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:02:58.812 [586/710] Linking static target drivers/librte_net_i40e.a 00:02:58.812 [587/710] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:58.812 [588/710] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:58.812 [589/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:02:58.812 [590/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:02:59.071 [591/710] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:59.071 [592/710] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.071 [593/710] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:02:59.071 [594/710] Linking target lib/librte_vhost.so.24.0 00:02:59.330 [595/710] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.330 [596/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:59.588 [597/710] Linking target drivers/librte_net_i40e.so.24.0 00:02:59.588 [598/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:59.588 [599/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:02:59.846 [600/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:03:00.105 [601/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:03:00.105 [602/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:03:00.105 [603/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:03:00.364 [604/710] 
Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:03:00.364 [605/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:03:00.364 [606/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:03:00.623 [607/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:03:00.881 [608/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:03:01.140 [609/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:03:01.140 [610/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:03:01.140 [611/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:03:01.140 [612/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:03:01.140 [613/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:03:01.398 [614/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:03:01.398 [615/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:03:01.398 [616/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:03:01.398 [617/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:03:01.657 [618/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:03:01.915 [619/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:03:01.915 [620/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:03:02.174 [621/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:03:02.174 [622/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:03:02.433 [623/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:03:02.433 [624/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:02.691 [625/710] Linking static target lib/librte_pipeline.a 00:03:02.949 [626/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:03:03.207 [627/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:03:03.207 [628/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:03:03.207 [629/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:03:03.465 [630/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:03:03.465 [631/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:03:03.465 [632/710] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:03:03.465 [633/710] Linking target app/dpdk-dumpcap 00:03:03.722 [634/710] Linking target app/dpdk-graph 00:03:03.722 [635/710] Linking target app/dpdk-proc-info 00:03:03.722 [636/710] Linking target app/dpdk-pdump 00:03:03.979 [637/710] Linking target app/dpdk-test-acl 00:03:03.979 [638/710] Linking target app/dpdk-test-compress-perf 00:03:03.979 [639/710] Linking target app/dpdk-test-cmdline 00:03:03.979 [640/710] Linking target app/dpdk-test-crypto-perf 00:03:04.236 [641/710] Linking target app/dpdk-test-dma-perf 00:03:04.236 [642/710] Linking target app/dpdk-test-fib 00:03:04.236 [643/710] Compiling C object 
app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:03:04.493 [644/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:03:04.493 [645/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:03:04.799 [646/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:03:04.799 [647/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:03:04.799 [648/710] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:03:04.799 [649/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:03:04.799 [650/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:03:05.088 [651/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:03:05.088 [652/710] Linking target app/dpdk-test-gpudev 00:03:05.346 [653/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:03:05.346 [654/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:03:05.346 [655/710] Linking target app/dpdk-test-eventdev 00:03:05.346 [656/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:03:05.346 [657/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:03:05.605 [658/710] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.605 [659/710] Linking target lib/librte_pipeline.so.24.0 00:03:05.863 [660/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:03:05.863 [661/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:03:05.863 [662/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:03:05.863 [663/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:03:05.863 [664/710] Linking target app/dpdk-test-flow-perf 00:03:05.863 [665/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:03:06.122 [666/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:03:06.122 [667/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:03:06.380 [668/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:03:06.380 [669/710] Linking target app/dpdk-test-bbdev 00:03:06.380 [670/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:03:06.638 [671/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:03:06.638 [672/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:03:06.638 [673/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:03:06.895 [674/710] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:03:06.895 [675/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:03:06.895 [676/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:03:07.153 [677/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:03:07.412 [678/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:03:07.412 [679/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:03:07.412 [680/710] Linking target app/dpdk-test-mldev 00:03:07.412 [681/710] Linking target app/dpdk-test-pipeline 00:03:07.669 [682/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:03:07.669 [683/710] Compiling C object 
app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:03:08.236 [684/710] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:03:08.236 [685/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:03:08.236 [686/710] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:03:08.236 [687/710] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:03:08.236 [688/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:03:08.494 [689/710] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:03:08.753 [690/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:03:09.011 [691/710] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:03:09.011 [692/710] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:03:09.011 [693/710] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:03:09.580 [694/710] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:03:09.580 [695/710] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:03:09.580 [696/710] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:03:10.146 [697/710] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:03:10.146 [698/710] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:03:10.146 [699/710] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:03:10.146 [700/710] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:03:10.146 [701/710] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:03:10.405 [702/710] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:03:10.405 [703/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:03:10.405 [704/710] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:03:10.405 [705/710] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:03:10.663 [706/710] Linking target app/dpdk-test-regex 00:03:10.663 [707/710] Linking target app/dpdk-test-sad 00:03:10.921 [708/710] Linking target app/dpdk-testpmd 00:03:11.179 [709/710] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:03:11.746 [710/710] Linking target app/dpdk-test-security-perf 00:03:11.746 04:06:24 -- common/autobuild_common.sh@190 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:03:11.746 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:11.746 [0/1] Installing files. 
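(For reference, the install step that follows is a standard meson/ninja flow. A minimal sketch of the equivalent manual commands, assuming the DPDK tree at /home/vagrant/spdk_repo/dpdk as shown in the log; the build directory name build-tmp and the -j10 install invocation are taken from the log itself, while the meson setup line and the --prefix value are assumptions inferred from the install destinations listed below.)

    cd /home/vagrant/spdk_repo/dpdk
    # assumed configure step; prefix inferred from the build/share/dpdk/... install paths below
    meson setup build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build
    # compile phase (the [N/710] lines above)
    ninja -C build-tmp -j10
    # install libraries, apps and example sources under build/ (the "Installing ..." lines below)
    ninja -C build-tmp -j10 install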
00:03:12.010 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:03:12.010 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:12.010 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:12.010 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:12.010 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:12.010 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:12.010 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:12.010 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:12.010 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:12.010 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:12.010 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:12.010 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:12.010 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:12.010 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:12.010 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:12.010 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:12.010 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:12.010 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:03:12.010 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:03:12.010 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:03:12.010 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:03:12.010 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:12.011 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:12.011 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:12.011 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:12.011 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:03:12.011 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:12.011 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:12.011 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:12.011 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:12.011 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:12.011 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:12.011 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:12.011 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:12.011 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:12.011 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:12.011 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:12.011 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:12.011 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:12.011 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:12.011 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:12.011 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:12.011 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:12.011 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:12.011 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:12.011 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:12.011 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:12.011 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:12.011 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:12.011 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:12.011 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:12.011 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:12.011 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:12.011 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:12.011 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:12.011 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:12.011 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:12.011 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:12.011 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:12.011 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:12.011 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:12.011 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.011 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.011 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.011 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.011 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.011 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.011 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.011 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.011 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.011 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.011 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.011 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.011 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.011 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.011 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.011 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.011 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.011 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.011 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.011 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.011 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.011 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.011 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.011 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.011 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.012 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.012 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:12.012 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:12.012 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:12.012 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:12.012 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:12.012 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:12.012 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:12.012 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:12.012 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:12.012 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:12.012 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:12.012 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.012 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.012 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.012 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.012 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.012 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.012 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.012 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.012 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.012 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.012 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.012 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.012 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.012 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.012 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.012 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.012 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.012 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.012 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.012 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.012 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.012 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.012 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.012 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.012 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.012 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.012 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:12.012 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.012 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.012 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.012 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.012 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.012 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.012 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.012 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.012 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.012 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.012 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.012 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.012 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.012 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.012 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.012 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.012 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.012 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.012 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.012 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.012 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.012 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.012 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.012 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.012 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.012 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.013 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:12.013 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:12.013 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:12.013 
Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:12.013 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:12.013 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:12.013 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:12.013 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:12.013 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:12.013 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:12.013 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:12.013 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:12.013 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:12.013 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:12.013 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:12.013 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:12.013 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:12.013 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:12.013 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:12.013 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:12.013 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:12.013 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:12.013 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:12.013 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:12.013 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:12.013 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:12.013 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:12.013 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:12.013 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:12.013 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:12.013 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:12.013 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:12.013 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:12.013 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:12.013 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:12.013 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:12.013 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:12.013 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:12.013 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.013 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.013 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.013 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.013 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.013 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.013 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.013 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.013 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.013 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.013 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.013 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.013 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.013 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.013 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.013 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.013 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.013 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.013 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.013 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.013 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.013 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.013 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.013 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.013 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.013 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.013 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.013 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.013 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.014 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.014 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.014 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.014 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:12.014 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:12.014 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:12.014 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:03:12.014 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:12.014 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:12.014 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:12.014 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:12.014 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:12.014 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:12.014 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:12.014 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:12.014 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:12.014 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:12.014 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:12.014 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:12.014 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:12.014 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:12.014 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:12.014 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:12.014 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:12.014 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:12.014 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:12.014 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:12.014 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:12.014 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:12.014 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:12.014 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:12.014 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:12.014 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:12.014 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:12.014 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:12.014 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:12.014 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:12.014 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:12.014 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:12.014 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:12.014 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:12.014 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:12.014 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:12.014 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.014 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.014 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.014 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.014 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.014 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.014 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.014 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.014 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.014 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.014 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.014 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec_sa.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.014 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.014 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.014 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.014 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.014 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.014 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.014 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.015 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.015 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.015 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.015 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.015 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.015 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.015 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.015 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.015 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.015 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.015 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.015 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.015 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.015 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.015 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.015 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.015 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.015 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.015 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.015 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.015 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.015 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.015 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.015 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:12.015 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:12.015 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:12.015 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:12.015 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:12.015 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:12.015 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:12.015 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:12.015 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:12.015 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:12.015 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:12.015 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:12.015 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:12.015 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:12.015 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:12.015 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:12.015 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:12.015 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:12.015 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:12.015 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:12.015 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:12.015 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:12.015 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:12.015 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:03:12.015 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:12.015 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/node.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:12.015 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:12.015 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:12.015 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:12.015 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:12.015 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:12.015 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:12.015 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:12.015 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:12.015 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:12.015 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:12.015 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:12.015 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:12.015 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:12.015 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:12.015 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:12.015 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:12.016 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:12.016 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:12.016 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:12.016 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:12.016 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:12.016 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:12.016 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:12.016 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:12.016 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:12.016 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:12.016 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:12.016 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:12.016 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:12.016 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:12.016 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:12.016 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:12.016 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:12.016 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:12.016 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:12.016 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:12.016 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:12.016 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:12.016 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:12.016 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:12.016 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:12.016 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:12.016 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:12.016 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:12.016 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:12.016 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:12.016 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:12.016 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:12.016 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:12.016 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:12.016 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:12.016 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:12.016 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:12.016 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:12.016 Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.016 Installing lib/librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.276 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.276 Installing lib/librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.276 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.276 Installing lib/librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.276 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.276 Installing lib/librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.276 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.276 Installing lib/librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.276 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.276 Installing lib/librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.276 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.276 Installing lib/librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.276 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.276 Installing lib/librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.276 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.276 Installing lib/librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.276 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.276 Installing lib/librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.276 Installing 
lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.276 Installing lib/librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.277 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.277 Installing lib/librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.277 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.277 Installing lib/librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.277 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.277 Installing lib/librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.277 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.277 Installing lib/librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.277 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.277 Installing lib/librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.277 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.277 Installing lib/librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.277 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.277 Installing lib/librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.277 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.277 Installing lib/librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.277 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.277 Installing lib/librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.277 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.277 Installing lib/librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.277 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.277 Installing lib/librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.277 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.277 Installing lib/librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.277 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.277 Installing lib/librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.277 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.277 Installing lib/librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.277 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.277 Installing lib/librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.277 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.277 Installing lib/librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.277 Installing lib/librte_dispatcher.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.277 Installing lib/librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.277 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.277 Installing lib/librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.277 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.277 Installing lib/librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
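Each library in this stretch of the log is installed twice: as a static archive (librte_*.a) and as a versioned shared object (librte_*.so.24.0). The "Installing symlink" entries further down complete the usual ABI-versioning chain. A minimal sketch of the resulting layout and of how a consumer could link against this tree, assuming standard toolchain behaviour and using only the paths shown in the log (the app.c name is purely illustrative, not part of the log):
  # librte_eal.so    -> librte_eal.so.24     (development symlink, used at link time)
  # librte_eal.so.24 -> librte_eal.so.24.0   (SONAME recorded in consuming binaries)
  cc app.c -I/home/vagrant/spdk_repo/dpdk/build/include \
      -L/home/vagrant/spdk_repo/dpdk/build/lib -lrte_eal -o app
  LD_LIBRARY_PATH=/home/vagrant/spdk_repo/dpdk/build/lib ./app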
00:03:12.277 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.277 Installing lib/librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.277 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.277 Installing lib/librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.277 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.277 Installing lib/librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.277 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.277 Installing lib/librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.277 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.277 Installing lib/librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.277 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.277 Installing lib/librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.277 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.277 Installing lib/librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.277 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.277 Installing lib/librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.277 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.277 Installing lib/librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.277 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.277 Installing lib/librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.277 Installing lib/librte_mldev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.277 Installing lib/librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.277 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.277 Installing lib/librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.277 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.277 Installing lib/librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.277 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.277 Installing lib/librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.277 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.277 Installing lib/librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.277 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.277 Installing lib/librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.277 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.277 Installing lib/librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.277 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.277 Installing lib/librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.277 Installing lib/librte_pdcp.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.277 Installing lib/librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.277 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.277 Installing lib/librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
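Once the remaining libraries, PMD drivers, apps and headers below are in place, the prefix /home/vagrant/spdk_repo/dpdk/build is self-contained, and the libdpdk.pc / libdpdk-libs.pc files installed further down under build/lib/pkgconfig are what downstream builds query. A rough sketch of how a downstream build (such as the SPDK build in this autotest) could pick the tree up; the SPDK path and configure flag are assumptions for illustration and are not taken from this log:
  export PKG_CONFIG_PATH=/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig
  pkg-config --cflags --libs libdpdk    # expands to the -I/-L/-l flags for the tree above
  ./configure --with-dpdk=/home/vagrant/spdk_repo/dpdk/build   # hypothetical SPDK configure invocation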
00:03:12.277 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.277 Installing lib/librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.277 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.277 Installing lib/librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.277 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.277 Installing lib/librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.277 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.277 Installing lib/librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.277 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.277 Installing lib/librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.541 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.541 Installing lib/librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.541 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.541 Installing drivers/librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:12.541 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.541 Installing drivers/librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:12.541 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.541 Installing drivers/librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:12.541 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.541 Installing drivers/librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:12.541 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:12.541 Installing app/dpdk-graph to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:12.541 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:12.541 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:12.541 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:12.541 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:12.541 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:12.541 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:12.541 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:12.541 Installing app/dpdk-test-dma-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:12.541 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:12.541 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:12.541 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:12.541 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:12.541 Installing app/dpdk-test-mldev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:12.541 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:12.541 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:12.541 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:12.541 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:12.541 Installing 
app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:12.541 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.541 Installing /home/vagrant/spdk_repo/dpdk/lib/log/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.541 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.541 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.541 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:12.541 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:12.541 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:12.541 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:12.541 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:12.541 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:12.541 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:12.541 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:12.541 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:12.541 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:12.541 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:12.541 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:12.541 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.541 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.541 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.541 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.541 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.541 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.541 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.541 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.541 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.541 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.541 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.541 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.541 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.541 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.541 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.541 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.541 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.541 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.541 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.541 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.541 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.541 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.541 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.541 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.541 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.541 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.541 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.541 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.541 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.541 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.541 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.541 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.541 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.541 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.541 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.541 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.541 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.541 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.541 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.541 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.541 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lock_annotations.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.541 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.541 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.541 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.541 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.541 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.542 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.542 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.542 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.542 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.542 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.542 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.542 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.542 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.542 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.542 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_stdatomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.542 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.542 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.542 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.542 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.542 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.542 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.542 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.542 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.542 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.542 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.542 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.542 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.542 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.542 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.542 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.542 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.542 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.542 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.542 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.542 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.542 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.542 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.542 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.542 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.542 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.542 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.542 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.542 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.542 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.542 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.542 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.542 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.542 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.542 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.542 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.542 Installing 
/home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.542 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.542 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_dtls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.542 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.542 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.542 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.542 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.542 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.542 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.542 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.542 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.542 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.542 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.542 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.542 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.542 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.542 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.542 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_pdcp_hdr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.542 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.542 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.542 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.542 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.542 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.542 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.542 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.542 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.542 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.542 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.542 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.542 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:12.542 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.542 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.542 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.542 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.542 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.542 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.542 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.542 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.542 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.542 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.542 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.542 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.542 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.542 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.542 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.542 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.542 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.542 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.542 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.542 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.542 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.542 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.542 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.542 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.542 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.542 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.542 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.542 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:12.542 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.542 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.542 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.543 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.543 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.543 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.543 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.543 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.543 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.543 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.543 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.543 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.543 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.543 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.543 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.543 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.543 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.543 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.543 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.543 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.543 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.543 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.543 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.543 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.543 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.543 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.543 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_dma_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.543 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.543 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.543 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.543 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.543 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.543 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.543 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.543 Installing /home/vagrant/spdk_repo/dpdk/lib/dispatcher/rte_dispatcher.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.543 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.543 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.543 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.543 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.543 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.543 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.543 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.543 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.543 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.543 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.543 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.543 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.543 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.543 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.543 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.543 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.543 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.543 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.543 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.543 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.543 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:12.543 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.543 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.543 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.543 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.543 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.543 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.543 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.543 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.543 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.543 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.543 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.543 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.543 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.543 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.543 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.543 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.543 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.543 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.543 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.543 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.543 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.543 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.543 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.543 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.543 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.543 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.543 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.543 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.543 
Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.543 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.543 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.543 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.543 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.543 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.543 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.543 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.543 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.543 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.543 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.543 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.543 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.543 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.543 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.543 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.543 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.543 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.543 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.543 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.543 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.543 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.543 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.543 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.544 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.544 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.544 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.544 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.544 Installing 
/home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.544 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.544 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.544 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.544 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.544 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.544 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.544 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.544 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.544 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.544 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.544 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.544 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.544 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.544 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.544 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.544 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.544 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.544 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.544 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.544 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.544 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.544 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_rtc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.544 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.544 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.544 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.544 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip6_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.544 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_udp4_input_api.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:12.544 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.544 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.544 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.544 Installing /home/vagrant/spdk_repo/dpdk/buildtools/dpdk-cmdline-gen.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:12.544 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:12.544 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:12.544 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:12.544 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:12.544 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-rss-flows.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:12.544 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.544 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:12.544 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:12.544 Installing symlink pointing to librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so.24 00:03:12.544 Installing symlink pointing to librte_log.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so 00:03:12.544 Installing symlink pointing to librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.24 00:03:12.544 Installing symlink pointing to librte_kvargs.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:12.544 Installing symlink pointing to librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.24 00:03:12.544 Installing symlink pointing to librte_telemetry.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:12.544 Installing symlink pointing to librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.24 00:03:12.544 Installing symlink pointing to librte_eal.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:12.544 Installing symlink pointing to librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.24 00:03:12.544 Installing symlink pointing to librte_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:12.544 Installing symlink pointing to librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.24 00:03:12.544 Installing symlink pointing to librte_rcu.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:12.544 Installing symlink pointing to librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.24 00:03:12.544 Installing symlink pointing to librte_mempool.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:12.544 Installing symlink pointing to librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.24 00:03:12.544 Installing symlink pointing to librte_mbuf.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:12.544 Installing symlink pointing to librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.24 00:03:12.544 Installing symlink pointing to librte_net.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:12.544 Installing symlink pointing to librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.24 00:03:12.544 Installing symlink pointing to librte_meter.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:12.544 Installing symlink pointing to librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.24 00:03:12.544 Installing symlink pointing to librte_ethdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:03:12.544 Installing symlink pointing to librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.24 00:03:12.544 Installing symlink pointing to librte_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:12.544 Installing symlink pointing to librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.24 00:03:12.544 Installing symlink pointing to librte_cmdline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:12.544 Installing symlink pointing to librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.24 00:03:12.544 Installing symlink pointing to librte_metrics.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:12.544 Installing symlink pointing to librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.24 00:03:12.544 Installing symlink pointing to librte_hash.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:12.544 Installing symlink pointing to librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.24 00:03:12.544 Installing symlink pointing to librte_timer.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:12.544 Installing symlink pointing to librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.24 00:03:12.544 Installing symlink pointing to librte_acl.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:12.544 Installing symlink pointing to librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.24 00:03:12.544 Installing symlink pointing to librte_bbdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:12.544 Installing symlink pointing to librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.24 00:03:12.544 Installing symlink pointing to librte_bitratestats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:12.544 Installing symlink pointing to librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.24 00:03:12.544 Installing symlink pointing to librte_bpf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:12.544 Installing symlink pointing to librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.24 00:03:12.544 Installing symlink pointing to librte_cfgfile.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:12.544 Installing symlink pointing to librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.24 00:03:12.544 Installing symlink pointing to librte_compressdev.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:12.544 Installing symlink pointing to librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.24 00:03:12.544 Installing symlink pointing to librte_cryptodev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:12.544 Installing symlink pointing to librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.24 00:03:12.544 Installing symlink pointing to librte_distributor.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:03:12.544 Installing symlink pointing to librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.24 00:03:12.544 Installing symlink pointing to librte_dmadev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:03:12.544 Installing symlink pointing to librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.24 00:03:12.544 Installing symlink pointing to librte_efd.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:03:12.544 Installing symlink pointing to librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.24 00:03:12.544 Installing symlink pointing to librte_eventdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:12.544 Installing symlink pointing to librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so.24 00:03:12.544 Installing symlink pointing to librte_dispatcher.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so 00:03:12.544 Installing symlink pointing to librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.24 00:03:12.544 Installing symlink pointing to librte_gpudev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:12.545 Installing symlink pointing to librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.24 00:03:12.545 Installing symlink pointing to librte_gro.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:03:12.545 Installing symlink pointing to librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.24 00:03:12.545 Installing symlink pointing to librte_gso.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:12.545 Installing symlink pointing to librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.24 00:03:12.545 Installing symlink pointing to librte_ip_frag.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:12.545 Installing symlink pointing to librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.24 00:03:12.545 Installing symlink pointing to librte_jobstats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:12.545 Installing symlink pointing to librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.24 00:03:12.545 Installing symlink pointing to librte_latencystats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:12.545 Installing symlink pointing to librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.24 00:03:12.545 Installing symlink pointing to librte_lpm.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:12.545 Installing symlink pointing to librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.24 00:03:12.545 Installing symlink pointing to 
librte_member.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:12.545 Installing symlink pointing to librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.24 00:03:12.545 Installing symlink pointing to librte_pcapng.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:12.545 Installing symlink pointing to librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.24 00:03:12.545 Installing symlink pointing to librte_power.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:03:12.545 Installing symlink pointing to librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.24 00:03:12.545 Installing symlink pointing to librte_rawdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:12.545 Installing symlink pointing to librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.24 00:03:12.545 Installing symlink pointing to librte_regexdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:12.545 Installing symlink pointing to librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so.24 00:03:12.545 Installing symlink pointing to librte_mldev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so 00:03:12.545 Installing symlink pointing to librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.24 00:03:12.545 Installing symlink pointing to librte_rib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:12.545 Installing symlink pointing to librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.24 00:03:12.545 Installing symlink pointing to librte_reorder.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:12.545 Installing symlink pointing to librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.24 00:03:12.545 Installing symlink pointing to librte_sched.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:12.545 Installing symlink pointing to librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.24 00:03:12.545 Installing symlink pointing to librte_security.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:12.545 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:03:12.545 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:03:12.545 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:03:12.545 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:03:12.545 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:03:12.545 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:03:12.545 './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so' 00:03:12.545 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:03:12.545 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:03:12.545 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:03:12.545 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:03:12.545 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:03:12.545 Installing symlink pointing to librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.24 00:03:12.545 Installing symlink pointing to librte_stack.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:12.545 Installing symlink pointing to librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.24 00:03:12.545 Installing symlink pointing to librte_vhost.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:12.545 Installing symlink pointing to librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.24 00:03:12.545 Installing symlink pointing to librte_ipsec.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:03:12.545 Installing symlink pointing to librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so.24 00:03:12.545 Installing symlink pointing to librte_pdcp.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so 00:03:12.545 Installing symlink pointing to librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.24 00:03:12.545 Installing symlink pointing to librte_fib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:12.545 Installing symlink pointing to librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.24 00:03:12.545 Installing symlink pointing to librte_port.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:12.545 Installing symlink pointing to librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.24 00:03:12.545 Installing symlink pointing to librte_pdump.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:12.545 Installing symlink pointing to librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.24 00:03:12.545 Installing symlink pointing to librte_table.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:12.545 Installing symlink pointing to librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.24 00:03:12.545 Installing symlink pointing to librte_pipeline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:12.545 Installing symlink pointing to librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.24 00:03:12.545 Installing symlink pointing to librte_graph.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:03:12.545 Installing symlink pointing to librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.24 00:03:12.545 Installing symlink pointing to librte_node.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:12.545 Installing symlink pointing to librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:03:12.545 Installing symlink pointing to librte_bus_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:03:12.545 Installing symlink pointing to librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:03:12.545 Installing symlink pointing to librte_bus_vdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:03:12.545 Installing symlink pointing to librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:03:12.545 Installing symlink pointing to librte_mempool_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:03:12.545 Installing symlink pointing to librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 
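Note on the pmds layout being installed above: the bus, mempool and net driver libraries are relocated into the versioned dpdk/pmds-24.0 plugin directory and then re-exposed through a chain of symlinks (librte_*.so -> librte_*.so.24 -> librte_*.so.24.0). A minimal sketch of that chain for one driver, using the paths printed in this log (a sketch only, not the symlink-drivers-solibs.sh script itself; it assumes the real librte_bus_pci.so.24.0 file has already been copied into the plugin directory by the install step):

  # Recreate the symlink chain for librte_bus_pci, mirroring the entries above.
  PREFIX=/home/vagrant/spdk_repo/dpdk/build/lib      # install prefix from this log
  PMDDIR="$PREFIX/dpdk/pmds-24.0"                    # versioned plugin directory
  mkdir -p "$PMDDIR"
  ln -sf librte_bus_pci.so.24.0 "$PMDDIR/librte_bus_pci.so.24"
  ln -sf librte_bus_pci.so.24   "$PMDDIR/librte_bus_pci.so"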
00:03:12.545 Installing symlink pointing to librte_net_i40e.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:03:12.545 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:03:12.804 04:06:25 -- common/autobuild_common.sh@192 -- $ uname -s 00:03:12.804 04:06:25 -- common/autobuild_common.sh@192 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:12.804 04:06:25 -- common/autobuild_common.sh@203 -- $ cat 00:03:12.804 04:06:25 -- common/autobuild_common.sh@208 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:12.804 00:03:12.804 real 1m1.797s 00:03:12.804 user 7m27.411s 00:03:12.804 sys 1m11.636s 00:03:12.804 04:06:25 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:03:12.804 04:06:25 -- common/autotest_common.sh@10 -- $ set +x 00:03:12.804 ************************************ 00:03:12.804 END TEST build_native_dpdk 00:03:12.804 ************************************ 00:03:12.804 04:06:25 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:12.804 04:06:25 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:12.804 04:06:25 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:12.804 04:06:25 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:12.804 04:06:25 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:12.804 04:06:25 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:12.804 04:06:25 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:12.804 04:06:25 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-shared 00:03:12.804 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:03:13.061 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:03:13.061 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:03:13.061 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:13.319 Using 'verbs' RDMA provider 00:03:26.455 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:03:41.381 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:03:41.381 Creating mk/config.mk...done. 00:03:41.381 Creating mk/cc.flags.mk...done. 00:03:41.381 Type 'make' to build. 00:03:41.381 04:06:51 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:03:41.381 04:06:51 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:03:41.381 04:06:51 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:03:41.381 04:06:51 -- common/autotest_common.sh@10 -- $ set +x 00:03:41.381 ************************************ 00:03:41.381 START TEST make 00:03:41.381 ************************************ 00:03:41.381 04:06:51 -- common/autotest_common.sh@1114 -- $ make -j10 00:03:41.381 make[1]: Nothing to be done for 'all'. 
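For readers reproducing this step: the configure invocation above builds SPDK against the DPDK tree installed earlier in this log, and the "Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs" message means the DPDK libraries are resolved through the libdpdk pkg-config files installed above. A trimmed sketch of the same flow follows (paths copied from this log; the full flag set used by the job is longer, and flags should be added or dropped to match the target environment):

  # Build SPDK against the locally installed DPDK, as this job does.
  DPDK_BUILD=/home/vagrant/spdk_repo/dpdk/build       # from the log above
  cd /home/vagrant/spdk_repo/spdk
  ./configure --with-dpdk="$DPDK_BUILD" --with-shared --enable-debug
  # configure picks DPDK up via the generated pkg-config files; the same
  # lookup can be done by hand:
  PKG_CONFIG_PATH="$DPDK_BUILD/lib/pkgconfig" pkg-config --libs libdpdk
  make -j"$(nproc)"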
00:04:07.922 CC lib/ut_mock/mock.o 00:04:07.922 CC lib/log/log_flags.o 00:04:07.922 CC lib/log/log.o 00:04:07.922 CC lib/log/log_deprecated.o 00:04:07.922 CC lib/ut/ut.o 00:04:07.922 LIB libspdk_ut_mock.a 00:04:07.922 LIB libspdk_ut.a 00:04:07.922 LIB libspdk_log.a 00:04:07.922 SO libspdk_ut_mock.so.5.0 00:04:07.922 SO libspdk_ut.so.1.0 00:04:07.922 SO libspdk_log.so.6.1 00:04:07.922 SYMLINK libspdk_ut_mock.so 00:04:07.922 SYMLINK libspdk_ut.so 00:04:07.922 SYMLINK libspdk_log.so 00:04:07.922 CC lib/dma/dma.o 00:04:07.922 CC lib/util/base64.o 00:04:07.922 CC lib/util/bit_array.o 00:04:07.922 CXX lib/trace_parser/trace.o 00:04:07.922 CC lib/util/cpuset.o 00:04:07.922 CC lib/util/crc16.o 00:04:07.922 CC lib/util/crc32.o 00:04:07.922 CC lib/util/crc32c.o 00:04:07.922 CC lib/ioat/ioat.o 00:04:07.922 CC lib/vfio_user/host/vfio_user_pci.o 00:04:07.922 CC lib/util/crc32_ieee.o 00:04:07.922 CC lib/util/crc64.o 00:04:07.922 CC lib/util/dif.o 00:04:07.922 CC lib/vfio_user/host/vfio_user.o 00:04:07.922 CC lib/util/fd.o 00:04:07.922 LIB libspdk_dma.a 00:04:07.922 SO libspdk_dma.so.3.0 00:04:07.922 CC lib/util/file.o 00:04:07.922 CC lib/util/hexlify.o 00:04:07.922 CC lib/util/iov.o 00:04:07.922 SYMLINK libspdk_dma.so 00:04:07.922 CC lib/util/math.o 00:04:07.922 CC lib/util/pipe.o 00:04:07.922 CC lib/util/strerror_tls.o 00:04:07.922 LIB libspdk_ioat.a 00:04:07.922 LIB libspdk_vfio_user.a 00:04:07.922 SO libspdk_ioat.so.6.0 00:04:07.923 CC lib/util/string.o 00:04:07.923 SO libspdk_vfio_user.so.4.0 00:04:07.923 CC lib/util/uuid.o 00:04:07.923 SYMLINK libspdk_ioat.so 00:04:07.923 CC lib/util/fd_group.o 00:04:07.923 SYMLINK libspdk_vfio_user.so 00:04:07.923 CC lib/util/xor.o 00:04:07.923 CC lib/util/zipf.o 00:04:07.923 LIB libspdk_util.a 00:04:07.923 SO libspdk_util.so.8.0 00:04:07.923 SYMLINK libspdk_util.so 00:04:07.923 LIB libspdk_trace_parser.a 00:04:07.923 SO libspdk_trace_parser.so.4.0 00:04:07.923 CC lib/json/json_parse.o 00:04:07.923 CC lib/json/json_util.o 00:04:07.923 CC lib/json/json_write.o 00:04:07.923 CC lib/conf/conf.o 00:04:07.923 CC lib/vmd/vmd.o 00:04:07.923 CC lib/vmd/led.o 00:04:07.923 CC lib/rdma/common.o 00:04:07.923 CC lib/idxd/idxd.o 00:04:07.923 CC lib/env_dpdk/env.o 00:04:07.923 SYMLINK libspdk_trace_parser.so 00:04:07.923 CC lib/idxd/idxd_user.o 00:04:07.923 CC lib/idxd/idxd_kernel.o 00:04:07.923 CC lib/rdma/rdma_verbs.o 00:04:07.923 CC lib/env_dpdk/memory.o 00:04:07.923 LIB libspdk_conf.a 00:04:07.923 SO libspdk_conf.so.5.0 00:04:07.923 LIB libspdk_json.a 00:04:07.923 CC lib/env_dpdk/pci.o 00:04:07.923 SO libspdk_json.so.5.1 00:04:07.923 SYMLINK libspdk_conf.so 00:04:07.923 CC lib/env_dpdk/init.o 00:04:07.923 CC lib/env_dpdk/threads.o 00:04:07.923 CC lib/env_dpdk/pci_ioat.o 00:04:07.923 SYMLINK libspdk_json.so 00:04:07.923 LIB libspdk_rdma.a 00:04:07.923 SO libspdk_rdma.so.5.0 00:04:07.923 CC lib/env_dpdk/pci_virtio.o 00:04:07.923 CC lib/jsonrpc/jsonrpc_server.o 00:04:07.923 CC lib/env_dpdk/pci_vmd.o 00:04:07.923 SYMLINK libspdk_rdma.so 00:04:07.923 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:07.923 LIB libspdk_idxd.a 00:04:07.923 SO libspdk_idxd.so.11.0 00:04:07.923 CC lib/env_dpdk/pci_idxd.o 00:04:07.923 LIB libspdk_vmd.a 00:04:07.923 CC lib/env_dpdk/pci_event.o 00:04:07.923 CC lib/env_dpdk/sigbus_handler.o 00:04:07.923 SO libspdk_vmd.so.5.0 00:04:07.923 SYMLINK libspdk_idxd.so 00:04:07.923 CC lib/env_dpdk/pci_dpdk.o 00:04:07.923 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:07.923 CC lib/jsonrpc/jsonrpc_client.o 00:04:07.923 SYMLINK libspdk_vmd.so 00:04:07.923 CC 
lib/jsonrpc/jsonrpc_client_tcp.o 00:04:07.923 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:07.923 LIB libspdk_jsonrpc.a 00:04:07.923 SO libspdk_jsonrpc.so.5.1 00:04:07.923 SYMLINK libspdk_jsonrpc.so 00:04:07.923 CC lib/rpc/rpc.o 00:04:07.923 LIB libspdk_rpc.a 00:04:07.923 LIB libspdk_env_dpdk.a 00:04:07.923 SO libspdk_rpc.so.5.0 00:04:07.923 SYMLINK libspdk_rpc.so 00:04:07.923 SO libspdk_env_dpdk.so.13.0 00:04:07.923 CC lib/trace/trace.o 00:04:07.923 CC lib/trace/trace_flags.o 00:04:07.923 CC lib/trace/trace_rpc.o 00:04:07.923 SYMLINK libspdk_env_dpdk.so 00:04:07.923 CC lib/notify/notify.o 00:04:07.923 CC lib/notify/notify_rpc.o 00:04:07.923 CC lib/sock/sock.o 00:04:07.923 CC lib/sock/sock_rpc.o 00:04:07.923 LIB libspdk_notify.a 00:04:07.923 SO libspdk_notify.so.5.0 00:04:07.923 LIB libspdk_trace.a 00:04:07.923 SYMLINK libspdk_notify.so 00:04:07.923 SO libspdk_trace.so.9.0 00:04:07.923 SYMLINK libspdk_trace.so 00:04:07.923 LIB libspdk_sock.a 00:04:07.923 SO libspdk_sock.so.8.0 00:04:07.923 SYMLINK libspdk_sock.so 00:04:07.923 CC lib/thread/iobuf.o 00:04:07.923 CC lib/thread/thread.o 00:04:07.923 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:07.923 CC lib/nvme/nvme_fabric.o 00:04:07.923 CC lib/nvme/nvme_ctrlr.o 00:04:07.923 CC lib/nvme/nvme_ns_cmd.o 00:04:07.923 CC lib/nvme/nvme_pcie_common.o 00:04:07.923 CC lib/nvme/nvme_ns.o 00:04:07.923 CC lib/nvme/nvme_pcie.o 00:04:07.923 CC lib/nvme/nvme_qpair.o 00:04:08.181 CC lib/nvme/nvme.o 00:04:08.439 CC lib/nvme/nvme_quirks.o 00:04:08.439 CC lib/nvme/nvme_transport.o 00:04:08.439 CC lib/nvme/nvme_discovery.o 00:04:08.697 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:08.697 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:08.697 CC lib/nvme/nvme_tcp.o 00:04:08.955 CC lib/nvme/nvme_opal.o 00:04:08.955 CC lib/nvme/nvme_io_msg.o 00:04:09.214 CC lib/nvme/nvme_poll_group.o 00:04:09.214 CC lib/nvme/nvme_zns.o 00:04:09.214 LIB libspdk_thread.a 00:04:09.214 CC lib/nvme/nvme_cuse.o 00:04:09.214 SO libspdk_thread.so.9.0 00:04:09.472 SYMLINK libspdk_thread.so 00:04:09.472 CC lib/nvme/nvme_vfio_user.o 00:04:09.472 CC lib/nvme/nvme_rdma.o 00:04:09.472 CC lib/accel/accel.o 00:04:09.472 CC lib/blob/blobstore.o 00:04:09.731 CC lib/accel/accel_rpc.o 00:04:09.731 CC lib/accel/accel_sw.o 00:04:09.731 CC lib/blob/request.o 00:04:09.731 CC lib/blob/zeroes.o 00:04:09.989 CC lib/blob/blob_bs_dev.o 00:04:10.247 CC lib/init/json_config.o 00:04:10.247 CC lib/init/subsystem.o 00:04:10.247 CC lib/virtio/virtio.o 00:04:10.247 CC lib/init/subsystem_rpc.o 00:04:10.247 CC lib/init/rpc.o 00:04:10.247 CC lib/virtio/virtio_vhost_user.o 00:04:10.247 CC lib/virtio/virtio_vfio_user.o 00:04:10.247 CC lib/virtio/virtio_pci.o 00:04:10.247 LIB libspdk_init.a 00:04:10.505 SO libspdk_init.so.4.0 00:04:10.505 SYMLINK libspdk_init.so 00:04:10.506 LIB libspdk_accel.a 00:04:10.506 SO libspdk_accel.so.14.0 00:04:10.506 LIB libspdk_virtio.a 00:04:10.764 CC lib/event/app.o 00:04:10.764 CC lib/event/scheduler_static.o 00:04:10.764 CC lib/event/log_rpc.o 00:04:10.764 CC lib/event/reactor.o 00:04:10.764 CC lib/event/app_rpc.o 00:04:10.764 SYMLINK libspdk_accel.so 00:04:10.764 SO libspdk_virtio.so.6.0 00:04:10.764 SYMLINK libspdk_virtio.so 00:04:10.764 CC lib/bdev/bdev.o 00:04:10.764 CC lib/bdev/bdev_rpc.o 00:04:10.764 CC lib/bdev/bdev_zone.o 00:04:10.764 CC lib/bdev/part.o 00:04:10.764 CC lib/bdev/scsi_nvme.o 00:04:11.023 LIB libspdk_nvme.a 00:04:11.023 LIB libspdk_event.a 00:04:11.023 SO libspdk_nvme.so.12.0 00:04:11.282 SO libspdk_event.so.12.0 00:04:11.282 SYMLINK libspdk_event.so 00:04:11.282 SYMLINK libspdk_nvme.so 00:04:12.660 
LIB libspdk_blob.a 00:04:12.660 SO libspdk_blob.so.10.1 00:04:12.661 SYMLINK libspdk_blob.so 00:04:12.661 CC lib/blobfs/blobfs.o 00:04:12.661 CC lib/blobfs/tree.o 00:04:12.661 CC lib/lvol/lvol.o 00:04:13.596 LIB libspdk_bdev.a 00:04:13.596 SO libspdk_bdev.so.14.0 00:04:13.596 LIB libspdk_blobfs.a 00:04:13.596 LIB libspdk_lvol.a 00:04:13.596 SO libspdk_blobfs.so.9.0 00:04:13.596 SO libspdk_lvol.so.9.1 00:04:13.855 SYMLINK libspdk_bdev.so 00:04:13.855 SYMLINK libspdk_lvol.so 00:04:13.855 SYMLINK libspdk_blobfs.so 00:04:13.855 CC lib/scsi/dev.o 00:04:13.855 CC lib/scsi/port.o 00:04:13.855 CC lib/scsi/lun.o 00:04:13.855 CC lib/nbd/nbd_rpc.o 00:04:13.855 CC lib/ublk/ublk.o 00:04:13.855 CC lib/scsi/scsi.o 00:04:13.855 CC lib/scsi/scsi_bdev.o 00:04:13.855 CC lib/ftl/ftl_core.o 00:04:13.855 CC lib/nbd/nbd.o 00:04:13.855 CC lib/nvmf/ctrlr.o 00:04:14.115 CC lib/nvmf/ctrlr_discovery.o 00:04:14.115 CC lib/nvmf/ctrlr_bdev.o 00:04:14.115 CC lib/ftl/ftl_init.o 00:04:14.115 CC lib/ftl/ftl_layout.o 00:04:14.374 CC lib/scsi/scsi_pr.o 00:04:14.374 CC lib/nvmf/subsystem.o 00:04:14.374 CC lib/nvmf/nvmf.o 00:04:14.374 LIB libspdk_nbd.a 00:04:14.374 SO libspdk_nbd.so.6.0 00:04:14.374 SYMLINK libspdk_nbd.so 00:04:14.374 CC lib/ublk/ublk_rpc.o 00:04:14.374 CC lib/ftl/ftl_debug.o 00:04:14.633 CC lib/scsi/scsi_rpc.o 00:04:14.633 CC lib/scsi/task.o 00:04:14.633 CC lib/nvmf/nvmf_rpc.o 00:04:14.633 CC lib/ftl/ftl_io.o 00:04:14.633 LIB libspdk_ublk.a 00:04:14.633 CC lib/ftl/ftl_sb.o 00:04:14.633 SO libspdk_ublk.so.2.0 00:04:14.633 CC lib/nvmf/transport.o 00:04:14.633 LIB libspdk_scsi.a 00:04:14.633 SYMLINK libspdk_ublk.so 00:04:14.633 CC lib/nvmf/tcp.o 00:04:14.892 CC lib/nvmf/rdma.o 00:04:14.892 SO libspdk_scsi.so.8.0 00:04:14.893 CC lib/ftl/ftl_l2p.o 00:04:14.893 CC lib/ftl/ftl_l2p_flat.o 00:04:14.893 SYMLINK libspdk_scsi.so 00:04:14.893 CC lib/ftl/ftl_nv_cache.o 00:04:15.152 CC lib/ftl/ftl_band.o 00:04:15.152 CC lib/ftl/ftl_band_ops.o 00:04:15.412 CC lib/iscsi/conn.o 00:04:15.412 CC lib/iscsi/init_grp.o 00:04:15.412 CC lib/iscsi/iscsi.o 00:04:15.412 CC lib/iscsi/md5.o 00:04:15.412 CC lib/iscsi/param.o 00:04:15.412 CC lib/vhost/vhost.o 00:04:15.412 CC lib/vhost/vhost_rpc.o 00:04:15.672 CC lib/iscsi/portal_grp.o 00:04:15.672 CC lib/iscsi/tgt_node.o 00:04:15.931 CC lib/iscsi/iscsi_subsystem.o 00:04:15.931 CC lib/ftl/ftl_writer.o 00:04:15.931 CC lib/iscsi/iscsi_rpc.o 00:04:15.931 CC lib/iscsi/task.o 00:04:16.190 CC lib/ftl/ftl_rq.o 00:04:16.190 CC lib/vhost/vhost_scsi.o 00:04:16.190 CC lib/vhost/vhost_blk.o 00:04:16.190 CC lib/vhost/rte_vhost_user.o 00:04:16.190 CC lib/ftl/ftl_reloc.o 00:04:16.190 CC lib/ftl/ftl_l2p_cache.o 00:04:16.190 CC lib/ftl/ftl_p2l.o 00:04:16.190 CC lib/ftl/mngt/ftl_mngt.o 00:04:16.190 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:16.450 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:16.709 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:16.709 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:16.709 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:16.709 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:16.709 LIB libspdk_iscsi.a 00:04:16.709 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:16.969 LIB libspdk_nvmf.a 00:04:16.969 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:16.969 SO libspdk_iscsi.so.7.0 00:04:16.969 SO libspdk_nvmf.so.17.0 00:04:16.969 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:16.969 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:16.969 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:16.969 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:16.969 SYMLINK libspdk_iscsi.so 00:04:16.969 CC lib/ftl/utils/ftl_conf.o 00:04:16.969 CC lib/ftl/utils/ftl_md.o 00:04:17.235 SYMLINK 
libspdk_nvmf.so 00:04:17.235 CC lib/ftl/utils/ftl_mempool.o 00:04:17.235 CC lib/ftl/utils/ftl_bitmap.o 00:04:17.235 CC lib/ftl/utils/ftl_property.o 00:04:17.235 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:17.235 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:17.235 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:17.235 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:17.235 LIB libspdk_vhost.a 00:04:17.235 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:17.235 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:17.493 SO libspdk_vhost.so.7.1 00:04:17.493 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:17.493 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:17.493 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:17.493 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:17.493 CC lib/ftl/base/ftl_base_dev.o 00:04:17.493 CC lib/ftl/base/ftl_base_bdev.o 00:04:17.493 SYMLINK libspdk_vhost.so 00:04:17.493 CC lib/ftl/ftl_trace.o 00:04:17.751 LIB libspdk_ftl.a 00:04:18.009 SO libspdk_ftl.so.8.0 00:04:18.268 SYMLINK libspdk_ftl.so 00:04:18.527 CC module/env_dpdk/env_dpdk_rpc.o 00:04:18.527 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:18.527 CC module/accel/iaa/accel_iaa.o 00:04:18.527 CC module/accel/error/accel_error.o 00:04:18.527 CC module/scheduler/gscheduler/gscheduler.o 00:04:18.527 CC module/accel/dsa/accel_dsa.o 00:04:18.527 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:18.527 CC module/accel/ioat/accel_ioat.o 00:04:18.527 CC module/blob/bdev/blob_bdev.o 00:04:18.527 CC module/sock/posix/posix.o 00:04:18.785 LIB libspdk_env_dpdk_rpc.a 00:04:18.785 SO libspdk_env_dpdk_rpc.so.5.0 00:04:18.785 LIB libspdk_scheduler_gscheduler.a 00:04:18.785 SYMLINK libspdk_env_dpdk_rpc.so 00:04:18.785 CC module/accel/ioat/accel_ioat_rpc.o 00:04:18.785 LIB libspdk_scheduler_dpdk_governor.a 00:04:18.785 CC module/accel/error/accel_error_rpc.o 00:04:18.785 SO libspdk_scheduler_gscheduler.so.3.0 00:04:18.785 CC module/accel/iaa/accel_iaa_rpc.o 00:04:18.785 SO libspdk_scheduler_dpdk_governor.so.3.0 00:04:18.785 SYMLINK libspdk_scheduler_gscheduler.so 00:04:18.785 LIB libspdk_scheduler_dynamic.a 00:04:18.785 CC module/accel/dsa/accel_dsa_rpc.o 00:04:18.785 SO libspdk_scheduler_dynamic.so.3.0 00:04:18.785 LIB libspdk_blob_bdev.a 00:04:18.785 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:19.044 LIB libspdk_accel_ioat.a 00:04:19.044 SO libspdk_blob_bdev.so.10.1 00:04:19.044 SYMLINK libspdk_scheduler_dynamic.so 00:04:19.044 SO libspdk_accel_ioat.so.5.0 00:04:19.044 LIB libspdk_accel_error.a 00:04:19.044 LIB libspdk_accel_iaa.a 00:04:19.044 CC module/sock/uring/uring.o 00:04:19.044 SYMLINK libspdk_blob_bdev.so 00:04:19.044 SO libspdk_accel_error.so.1.0 00:04:19.044 SO libspdk_accel_iaa.so.2.0 00:04:19.044 SYMLINK libspdk_accel_ioat.so 00:04:19.044 LIB libspdk_accel_dsa.a 00:04:19.044 SO libspdk_accel_dsa.so.4.0 00:04:19.044 SYMLINK libspdk_accel_error.so 00:04:19.044 SYMLINK libspdk_accel_iaa.so 00:04:19.044 SYMLINK libspdk_accel_dsa.so 00:04:19.303 CC module/bdev/lvol/vbdev_lvol.o 00:04:19.303 CC module/bdev/gpt/gpt.o 00:04:19.303 CC module/bdev/error/vbdev_error.o 00:04:19.303 CC module/bdev/null/bdev_null.o 00:04:19.303 CC module/bdev/delay/vbdev_delay.o 00:04:19.303 CC module/blobfs/bdev/blobfs_bdev.o 00:04:19.303 CC module/bdev/malloc/bdev_malloc.o 00:04:19.303 CC module/bdev/nvme/bdev_nvme.o 00:04:19.303 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:19.303 LIB libspdk_sock_posix.a 00:04:19.303 CC module/bdev/gpt/vbdev_gpt.o 00:04:19.303 SO libspdk_sock_posix.so.5.0 00:04:19.562 CC module/bdev/error/vbdev_error_rpc.o 00:04:19.562 SYMLINK libspdk_sock_posix.so 
00:04:19.562 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:19.562 CC module/bdev/null/bdev_null_rpc.o 00:04:19.562 LIB libspdk_blobfs_bdev.a 00:04:19.562 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:19.562 SO libspdk_blobfs_bdev.so.5.0 00:04:19.563 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:19.563 LIB libspdk_bdev_error.a 00:04:19.563 LIB libspdk_sock_uring.a 00:04:19.563 SYMLINK libspdk_blobfs_bdev.so 00:04:19.563 LIB libspdk_bdev_gpt.a 00:04:19.563 CC module/bdev/nvme/nvme_rpc.o 00:04:19.563 SO libspdk_bdev_error.so.5.0 00:04:19.841 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:19.841 LIB libspdk_bdev_null.a 00:04:19.841 SO libspdk_sock_uring.so.4.0 00:04:19.841 SO libspdk_bdev_gpt.so.5.0 00:04:19.841 SO libspdk_bdev_null.so.5.0 00:04:19.841 SYMLINK libspdk_bdev_error.so 00:04:19.841 LIB libspdk_bdev_malloc.a 00:04:19.841 SYMLINK libspdk_sock_uring.so 00:04:19.841 SYMLINK libspdk_bdev_gpt.so 00:04:19.841 CC module/bdev/nvme/bdev_mdns_client.o 00:04:19.841 SO libspdk_bdev_malloc.so.5.0 00:04:19.841 SYMLINK libspdk_bdev_null.so 00:04:19.841 LIB libspdk_bdev_delay.a 00:04:19.841 SO libspdk_bdev_delay.so.5.0 00:04:19.841 SYMLINK libspdk_bdev_malloc.so 00:04:19.841 CC module/bdev/passthru/vbdev_passthru.o 00:04:19.841 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:19.841 CC module/bdev/raid/bdev_raid.o 00:04:19.841 SYMLINK libspdk_bdev_delay.so 00:04:19.841 CC module/bdev/raid/bdev_raid_rpc.o 00:04:19.841 CC module/bdev/raid/bdev_raid_sb.o 00:04:19.841 CC module/bdev/split/vbdev_split.o 00:04:19.841 CC module/bdev/split/vbdev_split_rpc.o 00:04:20.101 LIB libspdk_bdev_lvol.a 00:04:20.101 SO libspdk_bdev_lvol.so.5.0 00:04:20.101 SYMLINK libspdk_bdev_lvol.so 00:04:20.101 CC module/bdev/raid/raid0.o 00:04:20.101 CC module/bdev/nvme/vbdev_opal.o 00:04:20.101 CC module/bdev/raid/raid1.o 00:04:20.101 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:20.101 LIB libspdk_bdev_passthru.a 00:04:20.101 LIB libspdk_bdev_split.a 00:04:20.101 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:20.101 SO libspdk_bdev_split.so.5.0 00:04:20.101 SO libspdk_bdev_passthru.so.5.0 00:04:20.360 CC module/bdev/uring/bdev_uring.o 00:04:20.360 SYMLINK libspdk_bdev_passthru.so 00:04:20.360 SYMLINK libspdk_bdev_split.so 00:04:20.360 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:20.360 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:20.360 CC module/bdev/aio/bdev_aio.o 00:04:20.360 CC module/bdev/raid/concat.o 00:04:20.360 CC module/bdev/uring/bdev_uring_rpc.o 00:04:20.620 LIB libspdk_bdev_zone_block.a 00:04:20.620 CC module/bdev/aio/bdev_aio_rpc.o 00:04:20.620 CC module/bdev/ftl/bdev_ftl.o 00:04:20.620 SO libspdk_bdev_zone_block.so.5.0 00:04:20.620 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:20.620 SYMLINK libspdk_bdev_zone_block.so 00:04:20.620 LIB libspdk_bdev_uring.a 00:04:20.620 SO libspdk_bdev_uring.so.5.0 00:04:20.620 CC module/bdev/iscsi/bdev_iscsi.o 00:04:20.620 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:20.620 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:20.620 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:20.620 SYMLINK libspdk_bdev_uring.so 00:04:20.620 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:20.620 LIB libspdk_bdev_aio.a 00:04:20.879 LIB libspdk_bdev_raid.a 00:04:20.879 SO libspdk_bdev_aio.so.5.0 00:04:20.879 LIB libspdk_bdev_ftl.a 00:04:20.879 SO libspdk_bdev_raid.so.5.0 00:04:20.879 SYMLINK libspdk_bdev_aio.so 00:04:20.879 SO libspdk_bdev_ftl.so.5.0 00:04:20.879 SYMLINK libspdk_bdev_raid.so 00:04:20.879 SYMLINK libspdk_bdev_ftl.so 00:04:21.138 LIB libspdk_bdev_iscsi.a 00:04:21.138 SO 
libspdk_bdev_iscsi.so.5.0 00:04:21.138 SYMLINK libspdk_bdev_iscsi.so 00:04:21.138 LIB libspdk_bdev_virtio.a 00:04:21.397 SO libspdk_bdev_virtio.so.5.0 00:04:21.397 SYMLINK libspdk_bdev_virtio.so 00:04:21.397 LIB libspdk_bdev_nvme.a 00:04:21.656 SO libspdk_bdev_nvme.so.6.0 00:04:21.656 SYMLINK libspdk_bdev_nvme.so 00:04:21.914 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:21.914 CC module/event/subsystems/iobuf/iobuf.o 00:04:21.914 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:21.914 CC module/event/subsystems/vmd/vmd.o 00:04:21.914 CC module/event/subsystems/scheduler/scheduler.o 00:04:21.914 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:21.914 CC module/event/subsystems/sock/sock.o 00:04:22.173 LIB libspdk_event_scheduler.a 00:04:22.173 LIB libspdk_event_vhost_blk.a 00:04:22.173 SO libspdk_event_vhost_blk.so.2.0 00:04:22.173 SO libspdk_event_scheduler.so.3.0 00:04:22.173 LIB libspdk_event_sock.a 00:04:22.173 LIB libspdk_event_vmd.a 00:04:22.173 LIB libspdk_event_iobuf.a 00:04:22.173 SO libspdk_event_sock.so.4.0 00:04:22.173 SO libspdk_event_vmd.so.5.0 00:04:22.173 SO libspdk_event_iobuf.so.2.0 00:04:22.173 SYMLINK libspdk_event_vhost_blk.so 00:04:22.173 SYMLINK libspdk_event_scheduler.so 00:04:22.173 SYMLINK libspdk_event_sock.so 00:04:22.173 SYMLINK libspdk_event_iobuf.so 00:04:22.173 SYMLINK libspdk_event_vmd.so 00:04:22.432 CC module/event/subsystems/accel/accel.o 00:04:22.432 LIB libspdk_event_accel.a 00:04:22.691 SO libspdk_event_accel.so.5.0 00:04:22.691 SYMLINK libspdk_event_accel.so 00:04:22.949 CC module/event/subsystems/bdev/bdev.o 00:04:22.949 LIB libspdk_event_bdev.a 00:04:22.949 SO libspdk_event_bdev.so.5.0 00:04:23.208 SYMLINK libspdk_event_bdev.so 00:04:23.208 CC module/event/subsystems/ublk/ublk.o 00:04:23.208 CC module/event/subsystems/nbd/nbd.o 00:04:23.208 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:23.208 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:23.208 CC module/event/subsystems/scsi/scsi.o 00:04:23.466 LIB libspdk_event_ublk.a 00:04:23.466 LIB libspdk_event_nbd.a 00:04:23.466 LIB libspdk_event_scsi.a 00:04:23.466 SO libspdk_event_nbd.so.5.0 00:04:23.466 SO libspdk_event_ublk.so.2.0 00:04:23.466 SO libspdk_event_scsi.so.5.0 00:04:23.466 SYMLINK libspdk_event_ublk.so 00:04:23.466 SYMLINK libspdk_event_nbd.so 00:04:23.466 LIB libspdk_event_nvmf.a 00:04:23.466 SYMLINK libspdk_event_scsi.so 00:04:23.725 SO libspdk_event_nvmf.so.5.0 00:04:23.725 SYMLINK libspdk_event_nvmf.so 00:04:23.725 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:23.725 CC module/event/subsystems/iscsi/iscsi.o 00:04:23.985 LIB libspdk_event_vhost_scsi.a 00:04:23.985 SO libspdk_event_vhost_scsi.so.2.0 00:04:23.985 LIB libspdk_event_iscsi.a 00:04:23.985 SO libspdk_event_iscsi.so.5.0 00:04:23.985 SYMLINK libspdk_event_vhost_scsi.so 00:04:23.985 SYMLINK libspdk_event_iscsi.so 00:04:24.243 SO libspdk.so.5.0 00:04:24.244 SYMLINK libspdk.so 00:04:24.244 CC app/trace_record/trace_record.o 00:04:24.244 CXX app/trace/trace.o 00:04:24.502 CC app/nvmf_tgt/nvmf_main.o 00:04:24.502 CC app/iscsi_tgt/iscsi_tgt.o 00:04:24.502 CC examples/accel/perf/accel_perf.o 00:04:24.502 CC app/spdk_tgt/spdk_tgt.o 00:04:24.502 CC test/blobfs/mkfs/mkfs.o 00:04:24.502 CC test/bdev/bdevio/bdevio.o 00:04:24.502 CC test/app/bdev_svc/bdev_svc.o 00:04:24.502 CC test/accel/dif/dif.o 00:04:24.760 LINK nvmf_tgt 00:04:24.760 LINK iscsi_tgt 00:04:24.760 LINK spdk_trace_record 00:04:24.760 LINK spdk_tgt 00:04:24.760 LINK bdev_svc 00:04:24.760 LINK mkfs 00:04:24.760 LINK spdk_trace 00:04:24.760 CC 
app/spdk_lspci/spdk_lspci.o 00:04:24.760 LINK dif 00:04:25.018 LINK accel_perf 00:04:25.018 LINK bdevio 00:04:25.018 CC app/spdk_nvme_perf/perf.o 00:04:25.018 TEST_HEADER include/spdk/accel.h 00:04:25.018 TEST_HEADER include/spdk/accel_module.h 00:04:25.018 TEST_HEADER include/spdk/assert.h 00:04:25.018 TEST_HEADER include/spdk/barrier.h 00:04:25.018 TEST_HEADER include/spdk/base64.h 00:04:25.018 TEST_HEADER include/spdk/bdev.h 00:04:25.018 TEST_HEADER include/spdk/bdev_module.h 00:04:25.018 TEST_HEADER include/spdk/bdev_zone.h 00:04:25.018 TEST_HEADER include/spdk/bit_array.h 00:04:25.018 TEST_HEADER include/spdk/bit_pool.h 00:04:25.018 TEST_HEADER include/spdk/blob_bdev.h 00:04:25.018 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:25.018 TEST_HEADER include/spdk/blobfs.h 00:04:25.018 TEST_HEADER include/spdk/blob.h 00:04:25.018 TEST_HEADER include/spdk/conf.h 00:04:25.018 TEST_HEADER include/spdk/config.h 00:04:25.018 TEST_HEADER include/spdk/cpuset.h 00:04:25.018 TEST_HEADER include/spdk/crc16.h 00:04:25.018 TEST_HEADER include/spdk/crc32.h 00:04:25.018 TEST_HEADER include/spdk/crc64.h 00:04:25.018 TEST_HEADER include/spdk/dif.h 00:04:25.018 TEST_HEADER include/spdk/dma.h 00:04:25.018 TEST_HEADER include/spdk/endian.h 00:04:25.018 TEST_HEADER include/spdk/env_dpdk.h 00:04:25.018 TEST_HEADER include/spdk/env.h 00:04:25.018 TEST_HEADER include/spdk/event.h 00:04:25.018 TEST_HEADER include/spdk/fd_group.h 00:04:25.018 TEST_HEADER include/spdk/fd.h 00:04:25.018 TEST_HEADER include/spdk/file.h 00:04:25.018 TEST_HEADER include/spdk/ftl.h 00:04:25.018 TEST_HEADER include/spdk/gpt_spec.h 00:04:25.018 TEST_HEADER include/spdk/hexlify.h 00:04:25.018 CC test/app/histogram_perf/histogram_perf.o 00:04:25.018 TEST_HEADER include/spdk/histogram_data.h 00:04:25.018 TEST_HEADER include/spdk/idxd.h 00:04:25.018 TEST_HEADER include/spdk/idxd_spec.h 00:04:25.018 TEST_HEADER include/spdk/init.h 00:04:25.018 TEST_HEADER include/spdk/ioat.h 00:04:25.018 TEST_HEADER include/spdk/ioat_spec.h 00:04:25.018 TEST_HEADER include/spdk/iscsi_spec.h 00:04:25.018 TEST_HEADER include/spdk/json.h 00:04:25.018 TEST_HEADER include/spdk/jsonrpc.h 00:04:25.018 TEST_HEADER include/spdk/likely.h 00:04:25.018 TEST_HEADER include/spdk/log.h 00:04:25.018 TEST_HEADER include/spdk/lvol.h 00:04:25.018 TEST_HEADER include/spdk/memory.h 00:04:25.018 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:25.018 TEST_HEADER include/spdk/mmio.h 00:04:25.018 TEST_HEADER include/spdk/nbd.h 00:04:25.018 TEST_HEADER include/spdk/notify.h 00:04:25.018 TEST_HEADER include/spdk/nvme.h 00:04:25.018 LINK spdk_lspci 00:04:25.018 TEST_HEADER include/spdk/nvme_intel.h 00:04:25.018 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:25.018 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:25.018 TEST_HEADER include/spdk/nvme_spec.h 00:04:25.018 TEST_HEADER include/spdk/nvme_zns.h 00:04:25.018 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:25.018 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:25.018 CC examples/bdev/hello_world/hello_bdev.o 00:04:25.018 TEST_HEADER include/spdk/nvmf.h 00:04:25.018 TEST_HEADER include/spdk/nvmf_spec.h 00:04:25.018 TEST_HEADER include/spdk/nvmf_transport.h 00:04:25.018 TEST_HEADER include/spdk/opal.h 00:04:25.018 TEST_HEADER include/spdk/opal_spec.h 00:04:25.018 TEST_HEADER include/spdk/pci_ids.h 00:04:25.018 TEST_HEADER include/spdk/pipe.h 00:04:25.018 TEST_HEADER include/spdk/queue.h 00:04:25.018 TEST_HEADER include/spdk/reduce.h 00:04:25.018 TEST_HEADER include/spdk/rpc.h 00:04:25.018 TEST_HEADER include/spdk/scheduler.h 00:04:25.018 
TEST_HEADER include/spdk/scsi.h 00:04:25.018 TEST_HEADER include/spdk/scsi_spec.h 00:04:25.018 TEST_HEADER include/spdk/sock.h 00:04:25.018 TEST_HEADER include/spdk/stdinc.h 00:04:25.018 TEST_HEADER include/spdk/string.h 00:04:25.019 TEST_HEADER include/spdk/thread.h 00:04:25.019 TEST_HEADER include/spdk/trace.h 00:04:25.019 TEST_HEADER include/spdk/trace_parser.h 00:04:25.019 TEST_HEADER include/spdk/tree.h 00:04:25.019 TEST_HEADER include/spdk/ublk.h 00:04:25.019 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:25.019 TEST_HEADER include/spdk/util.h 00:04:25.019 TEST_HEADER include/spdk/uuid.h 00:04:25.019 TEST_HEADER include/spdk/version.h 00:04:25.019 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:25.019 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:25.019 TEST_HEADER include/spdk/vhost.h 00:04:25.019 TEST_HEADER include/spdk/vmd.h 00:04:25.019 TEST_HEADER include/spdk/xor.h 00:04:25.019 TEST_HEADER include/spdk/zipf.h 00:04:25.019 CXX test/cpp_headers/accel.o 00:04:25.019 LINK histogram_perf 00:04:25.276 CC test/app/stub/stub.o 00:04:25.276 CC test/app/jsoncat/jsoncat.o 00:04:25.276 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:25.276 CXX test/cpp_headers/accel_module.o 00:04:25.276 CXX test/cpp_headers/assert.o 00:04:25.276 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:25.276 LINK hello_bdev 00:04:25.276 LINK jsoncat 00:04:25.276 CC examples/bdev/bdevperf/bdevperf.o 00:04:25.534 LINK stub 00:04:25.534 CXX test/cpp_headers/barrier.o 00:04:25.534 LINK nvme_fuzz 00:04:25.534 CC app/spdk_nvme_identify/identify.o 00:04:25.534 CC examples/ioat/perf/perf.o 00:04:25.534 CXX test/cpp_headers/base64.o 00:04:25.534 CC examples/blob/hello_world/hello_blob.o 00:04:25.792 CC examples/blob/cli/blobcli.o 00:04:25.792 CC app/spdk_nvme_discover/discovery_aer.o 00:04:25.792 LINK vhost_fuzz 00:04:25.792 LINK spdk_nvme_perf 00:04:25.792 CXX test/cpp_headers/bdev.o 00:04:25.792 LINK ioat_perf 00:04:25.792 CXX test/cpp_headers/bdev_module.o 00:04:25.792 LINK hello_blob 00:04:26.050 LINK spdk_nvme_discover 00:04:26.050 CXX test/cpp_headers/bdev_zone.o 00:04:26.050 CC examples/ioat/verify/verify.o 00:04:26.050 CC test/dma/test_dma/test_dma.o 00:04:26.050 CXX test/cpp_headers/bit_array.o 00:04:26.050 LINK bdevperf 00:04:26.050 CC app/spdk_top/spdk_top.o 00:04:26.050 LINK blobcli 00:04:26.308 CC app/vhost/vhost.o 00:04:26.308 CXX test/cpp_headers/bit_pool.o 00:04:26.308 CXX test/cpp_headers/blob_bdev.o 00:04:26.308 LINK verify 00:04:26.308 LINK spdk_nvme_identify 00:04:26.308 CXX test/cpp_headers/blobfs_bdev.o 00:04:26.308 LINK vhost 00:04:26.308 CC examples/nvme/hello_world/hello_world.o 00:04:26.566 LINK test_dma 00:04:26.566 CC examples/sock/hello_world/hello_sock.o 00:04:26.566 CC examples/vmd/lsvmd/lsvmd.o 00:04:26.566 CXX test/cpp_headers/blobfs.o 00:04:26.566 CC examples/util/zipf/zipf.o 00:04:26.566 CXX test/cpp_headers/blob.o 00:04:26.566 CC examples/nvmf/nvmf/nvmf.o 00:04:26.566 LINK hello_world 00:04:26.566 LINK lsvmd 00:04:26.823 CXX test/cpp_headers/conf.o 00:04:26.824 LINK iscsi_fuzz 00:04:26.824 LINK zipf 00:04:26.824 CC examples/nvme/reconnect/reconnect.o 00:04:26.824 LINK hello_sock 00:04:26.824 CC examples/vmd/led/led.o 00:04:26.824 CXX test/cpp_headers/config.o 00:04:26.824 CXX test/cpp_headers/cpuset.o 00:04:26.824 CXX test/cpp_headers/crc16.o 00:04:27.082 LINK nvmf 00:04:27.082 CC examples/thread/thread/thread_ex.o 00:04:27.082 LINK led 00:04:27.082 CC examples/idxd/perf/perf.o 00:04:27.082 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:27.082 LINK spdk_top 00:04:27.082 CC 
test/env/vtophys/vtophys.o 00:04:27.082 CXX test/cpp_headers/crc32.o 00:04:27.082 CXX test/cpp_headers/crc64.o 00:04:27.082 LINK reconnect 00:04:27.082 CC test/env/mem_callbacks/mem_callbacks.o 00:04:27.082 CXX test/cpp_headers/dif.o 00:04:27.082 LINK interrupt_tgt 00:04:27.082 LINK thread 00:04:27.340 LINK vtophys 00:04:27.340 CC app/spdk_dd/spdk_dd.o 00:04:27.340 LINK idxd_perf 00:04:27.340 CXX test/cpp_headers/dma.o 00:04:27.340 CXX test/cpp_headers/endian.o 00:04:27.340 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:27.340 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:27.340 CC examples/nvme/arbitration/arbitration.o 00:04:27.340 CXX test/cpp_headers/env_dpdk.o 00:04:27.340 CXX test/cpp_headers/env.o 00:04:27.340 CXX test/cpp_headers/event.o 00:04:27.598 LINK env_dpdk_post_init 00:04:27.598 CXX test/cpp_headers/fd_group.o 00:04:27.598 CC examples/nvme/hotplug/hotplug.o 00:04:27.598 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:27.598 CXX test/cpp_headers/fd.o 00:04:27.598 LINK spdk_dd 00:04:27.598 LINK arbitration 00:04:27.598 CC examples/nvme/abort/abort.o 00:04:27.856 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:27.856 LINK cmb_copy 00:04:27.856 LINK mem_callbacks 00:04:27.856 CXX test/cpp_headers/file.o 00:04:27.856 LINK hotplug 00:04:27.856 CC test/event/event_perf/event_perf.o 00:04:27.856 CXX test/cpp_headers/ftl.o 00:04:27.856 LINK nvme_manage 00:04:27.856 LINK pmr_persistence 00:04:27.856 CXX test/cpp_headers/gpt_spec.o 00:04:27.856 CC test/env/memory/memory_ut.o 00:04:28.113 LINK event_perf 00:04:28.113 CC app/fio/nvme/fio_plugin.o 00:04:28.113 CC test/event/reactor/reactor.o 00:04:28.113 CC app/fio/bdev/fio_plugin.o 00:04:28.113 CC test/event/reactor_perf/reactor_perf.o 00:04:28.113 CXX test/cpp_headers/hexlify.o 00:04:28.113 LINK abort 00:04:28.113 CC test/event/app_repeat/app_repeat.o 00:04:28.113 CC test/event/scheduler/scheduler.o 00:04:28.113 LINK reactor 00:04:28.113 LINK reactor_perf 00:04:28.371 CXX test/cpp_headers/histogram_data.o 00:04:28.371 LINK app_repeat 00:04:28.371 CC test/lvol/esnap/esnap.o 00:04:28.371 CC test/rpc_client/rpc_client_test.o 00:04:28.371 CC test/nvme/aer/aer.o 00:04:28.371 LINK scheduler 00:04:28.371 CXX test/cpp_headers/idxd.o 00:04:28.371 CC test/thread/poller_perf/poller_perf.o 00:04:28.371 CC test/nvme/reset/reset.o 00:04:28.628 LINK spdk_nvme 00:04:28.628 LINK spdk_bdev 00:04:28.628 LINK rpc_client_test 00:04:28.628 CXX test/cpp_headers/idxd_spec.o 00:04:28.628 LINK poller_perf 00:04:28.628 CC test/nvme/sgl/sgl.o 00:04:28.628 LINK aer 00:04:28.628 CC test/nvme/e2edp/nvme_dp.o 00:04:28.628 CC test/nvme/overhead/overhead.o 00:04:28.629 CC test/nvme/err_injection/err_injection.o 00:04:28.629 CXX test/cpp_headers/init.o 00:04:28.888 LINK reset 00:04:28.888 CXX test/cpp_headers/ioat.o 00:04:28.888 LINK memory_ut 00:04:28.888 CC test/nvme/startup/startup.o 00:04:28.888 CXX test/cpp_headers/ioat_spec.o 00:04:28.888 LINK err_injection 00:04:28.888 LINK sgl 00:04:28.888 LINK nvme_dp 00:04:28.888 CC test/nvme/reserve/reserve.o 00:04:28.888 LINK overhead 00:04:29.145 CC test/nvme/simple_copy/simple_copy.o 00:04:29.145 LINK startup 00:04:29.145 CC test/env/pci/pci_ut.o 00:04:29.145 CXX test/cpp_headers/iscsi_spec.o 00:04:29.145 CC test/nvme/connect_stress/connect_stress.o 00:04:29.145 CC test/nvme/compliance/nvme_compliance.o 00:04:29.145 CC test/nvme/boot_partition/boot_partition.o 00:04:29.145 LINK reserve 00:04:29.145 CC test/nvme/fused_ordering/fused_ordering.o 00:04:29.145 CXX test/cpp_headers/json.o 00:04:29.145 CC 
test/nvme/doorbell_aers/doorbell_aers.o 00:04:29.403 LINK simple_copy 00:04:29.403 LINK connect_stress 00:04:29.403 LINK boot_partition 00:04:29.403 CC test/nvme/fdp/fdp.o 00:04:29.403 CXX test/cpp_headers/jsonrpc.o 00:04:29.403 LINK fused_ordering 00:04:29.403 LINK pci_ut 00:04:29.403 LINK doorbell_aers 00:04:29.403 CXX test/cpp_headers/likely.o 00:04:29.403 LINK nvme_compliance 00:04:29.662 CXX test/cpp_headers/log.o 00:04:29.662 CC test/nvme/cuse/cuse.o 00:04:29.662 CXX test/cpp_headers/lvol.o 00:04:29.662 CXX test/cpp_headers/memory.o 00:04:29.662 CXX test/cpp_headers/mmio.o 00:04:29.662 CXX test/cpp_headers/nbd.o 00:04:29.662 CXX test/cpp_headers/notify.o 00:04:29.662 CXX test/cpp_headers/nvme.o 00:04:29.662 CXX test/cpp_headers/nvme_intel.o 00:04:29.662 LINK fdp 00:04:29.662 CXX test/cpp_headers/nvme_ocssd.o 00:04:29.662 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:29.920 CXX test/cpp_headers/nvme_spec.o 00:04:29.920 CXX test/cpp_headers/nvme_zns.o 00:04:29.920 CXX test/cpp_headers/nvmf_cmd.o 00:04:29.920 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:29.920 CXX test/cpp_headers/nvmf.o 00:04:29.920 CXX test/cpp_headers/nvmf_spec.o 00:04:29.920 CXX test/cpp_headers/nvmf_transport.o 00:04:29.920 CXX test/cpp_headers/opal.o 00:04:29.920 CXX test/cpp_headers/opal_spec.o 00:04:29.920 CXX test/cpp_headers/pci_ids.o 00:04:30.178 CXX test/cpp_headers/pipe.o 00:04:30.178 CXX test/cpp_headers/queue.o 00:04:30.178 CXX test/cpp_headers/reduce.o 00:04:30.178 CXX test/cpp_headers/rpc.o 00:04:30.178 CXX test/cpp_headers/scheduler.o 00:04:30.178 CXX test/cpp_headers/scsi.o 00:04:30.178 CXX test/cpp_headers/scsi_spec.o 00:04:30.178 CXX test/cpp_headers/sock.o 00:04:30.178 CXX test/cpp_headers/stdinc.o 00:04:30.178 CXX test/cpp_headers/string.o 00:04:30.178 CXX test/cpp_headers/thread.o 00:04:30.178 CXX test/cpp_headers/trace.o 00:04:30.178 CXX test/cpp_headers/trace_parser.o 00:04:30.178 CXX test/cpp_headers/tree.o 00:04:30.437 CXX test/cpp_headers/ublk.o 00:04:30.437 CXX test/cpp_headers/util.o 00:04:30.437 CXX test/cpp_headers/uuid.o 00:04:30.437 CXX test/cpp_headers/version.o 00:04:30.437 CXX test/cpp_headers/vfio_user_pci.o 00:04:30.437 CXX test/cpp_headers/vfio_user_spec.o 00:04:30.437 CXX test/cpp_headers/vhost.o 00:04:30.437 CXX test/cpp_headers/vmd.o 00:04:30.437 CXX test/cpp_headers/xor.o 00:04:30.437 CXX test/cpp_headers/zipf.o 00:04:30.697 LINK cuse 00:04:32.612 LINK esnap 00:04:32.869 ************************************ 00:04:32.869 END TEST make 00:04:32.869 ************************************ 00:04:32.869 00:04:32.869 real 0m53.520s 00:04:32.869 user 4m58.132s 00:04:32.869 sys 1m1.649s 00:04:32.869 04:07:45 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:04:32.869 04:07:45 -- common/autotest_common.sh@10 -- $ set +x 00:04:33.127 04:07:45 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:33.127 04:07:45 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:33.127 04:07:45 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:33.127 04:07:45 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:33.127 04:07:45 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:33.127 04:07:45 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:33.127 04:07:45 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:33.127 04:07:45 -- scripts/common.sh@335 -- # IFS=.-: 00:04:33.127 04:07:45 -- scripts/common.sh@335 -- # read -ra ver1 00:04:33.127 04:07:45 -- scripts/common.sh@336 -- # IFS=.-: 00:04:33.127 04:07:45 -- scripts/common.sh@336 -- # read -ra ver2 
00:04:33.127 04:07:45 -- scripts/common.sh@337 -- # local 'op=<' 00:04:33.127 04:07:45 -- scripts/common.sh@339 -- # ver1_l=2 00:04:33.127 04:07:45 -- scripts/common.sh@340 -- # ver2_l=1 00:04:33.127 04:07:45 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:33.127 04:07:45 -- scripts/common.sh@343 -- # case "$op" in 00:04:33.127 04:07:45 -- scripts/common.sh@344 -- # : 1 00:04:33.127 04:07:45 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:33.127 04:07:45 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:33.127 04:07:45 -- scripts/common.sh@364 -- # decimal 1 00:04:33.127 04:07:45 -- scripts/common.sh@352 -- # local d=1 00:04:33.127 04:07:45 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:33.127 04:07:45 -- scripts/common.sh@354 -- # echo 1 00:04:33.127 04:07:45 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:33.127 04:07:45 -- scripts/common.sh@365 -- # decimal 2 00:04:33.127 04:07:45 -- scripts/common.sh@352 -- # local d=2 00:04:33.127 04:07:45 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:33.127 04:07:45 -- scripts/common.sh@354 -- # echo 2 00:04:33.127 04:07:45 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:33.127 04:07:45 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:33.128 04:07:45 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:33.128 04:07:45 -- scripts/common.sh@367 -- # return 0 00:04:33.128 04:07:45 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:33.128 04:07:45 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:33.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.128 --rc genhtml_branch_coverage=1 00:04:33.128 --rc genhtml_function_coverage=1 00:04:33.128 --rc genhtml_legend=1 00:04:33.128 --rc geninfo_all_blocks=1 00:04:33.128 --rc geninfo_unexecuted_blocks=1 00:04:33.128 00:04:33.128 ' 00:04:33.128 04:07:45 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:33.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.128 --rc genhtml_branch_coverage=1 00:04:33.128 --rc genhtml_function_coverage=1 00:04:33.128 --rc genhtml_legend=1 00:04:33.128 --rc geninfo_all_blocks=1 00:04:33.128 --rc geninfo_unexecuted_blocks=1 00:04:33.128 00:04:33.128 ' 00:04:33.128 04:07:45 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:33.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.128 --rc genhtml_branch_coverage=1 00:04:33.128 --rc genhtml_function_coverage=1 00:04:33.128 --rc genhtml_legend=1 00:04:33.128 --rc geninfo_all_blocks=1 00:04:33.128 --rc geninfo_unexecuted_blocks=1 00:04:33.128 00:04:33.128 ' 00:04:33.128 04:07:45 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:33.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.128 --rc genhtml_branch_coverage=1 00:04:33.128 --rc genhtml_function_coverage=1 00:04:33.128 --rc genhtml_legend=1 00:04:33.128 --rc geninfo_all_blocks=1 00:04:33.128 --rc geninfo_unexecuted_blocks=1 00:04:33.128 00:04:33.128 ' 00:04:33.128 04:07:45 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:33.128 04:07:45 -- nvmf/common.sh@7 -- # uname -s 00:04:33.128 04:07:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:33.128 04:07:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:33.128 04:07:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:33.128 04:07:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:33.128 04:07:45 -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:04:33.128 04:07:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:33.128 04:07:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:33.128 04:07:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:33.128 04:07:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:33.128 04:07:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:33.128 04:07:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cb4d3929-adbe-4142-b5d1-990bbf2d4fca 00:04:33.128 04:07:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=cb4d3929-adbe-4142-b5d1-990bbf2d4fca 00:04:33.128 04:07:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:33.128 04:07:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:33.128 04:07:45 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:04:33.128 04:07:45 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:33.128 04:07:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:33.128 04:07:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:33.128 04:07:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:33.128 04:07:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:33.128 04:07:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:33.128 04:07:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:33.128 04:07:45 -- paths/export.sh@5 -- # export PATH 00:04:33.128 04:07:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:33.128 04:07:45 -- nvmf/common.sh@46 -- # : 0 00:04:33.128 04:07:45 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:33.128 04:07:45 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:33.128 04:07:45 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:33.128 04:07:45 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:33.128 04:07:45 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:33.128 04:07:45 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:33.128 04:07:45 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:33.128 04:07:45 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:33.128 04:07:45 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:33.128 04:07:45 -- spdk/autotest.sh@32 -- # uname -s 00:04:33.128 04:07:45 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:33.128 04:07:45 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:33.128 04:07:45 -- spdk/autotest.sh@34 -- # mkdir -p 
/home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:33.128 04:07:45 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:33.128 04:07:45 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:33.385 04:07:45 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:33.386 04:07:45 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:33.386 04:07:45 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:33.386 04:07:45 -- spdk/autotest.sh@48 -- # udevadm_pid=60042 00:04:33.386 04:07:45 -- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:04:33.386 04:07:45 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:33.386 04:07:45 -- spdk/autotest.sh@54 -- # echo 60044 00:04:33.386 04:07:45 -- spdk/autotest.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power 00:04:33.386 04:07:45 -- spdk/autotest.sh@56 -- # echo 60045 00:04:33.386 04:07:45 -- spdk/autotest.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power 00:04:33.386 04:07:45 -- spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]] 00:04:33.386 04:07:45 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:33.386 04:07:45 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:04:33.386 04:07:45 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:33.386 04:07:45 -- common/autotest_common.sh@10 -- # set +x 00:04:33.386 04:07:45 -- spdk/autotest.sh@70 -- # create_test_list 00:04:33.386 04:07:45 -- common/autotest_common.sh@746 -- # xtrace_disable 00:04:33.386 04:07:45 -- common/autotest_common.sh@10 -- # set +x 00:04:33.386 04:07:45 -- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:33.386 04:07:45 -- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:33.386 04:07:45 -- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk 00:04:33.386 04:07:45 -- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:33.386 04:07:45 -- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk 00:04:33.386 04:07:45 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:04:33.386 04:07:45 -- common/autotest_common.sh@1450 -- # uname 00:04:33.386 04:07:45 -- common/autotest_common.sh@1450 -- # '[' Linux = FreeBSD ']' 00:04:33.386 04:07:45 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:04:33.386 04:07:45 -- common/autotest_common.sh@1470 -- # uname 00:04:33.386 04:07:45 -- common/autotest_common.sh@1470 -- # [[ Linux = FreeBSD ]] 00:04:33.386 04:07:45 -- spdk/autotest.sh@79 -- # [[ y == y ]] 00:04:33.386 04:07:45 -- spdk/autotest.sh@81 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:33.386 lcov: LCOV version 1.15 00:04:33.386 04:07:45 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:41.524 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:04:41.524 geninfo: 
WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:04:41.524 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:04:41.524 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:04:41.524 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:04:41.524 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:05:03.480 04:08:14 -- spdk/autotest.sh@87 -- # timing_enter pre_cleanup 00:05:03.480 04:08:14 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:03.480 04:08:14 -- common/autotest_common.sh@10 -- # set +x 00:05:03.480 04:08:14 -- spdk/autotest.sh@89 -- # rm -f 00:05:03.480 04:08:14 -- spdk/autotest.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:03.480 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:03.480 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:05:03.480 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:05:03.480 04:08:15 -- spdk/autotest.sh@94 -- # get_zoned_devs 00:05:03.480 04:08:15 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:05:03.480 04:08:15 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:05:03.480 04:08:15 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:05:03.480 04:08:15 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:03.480 04:08:15 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:05:03.480 04:08:15 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:05:03.480 04:08:15 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:03.480 04:08:15 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:03.480 04:08:15 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:03.480 04:08:15 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:05:03.480 04:08:15 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:05:03.480 04:08:15 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:03.480 04:08:15 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:03.480 04:08:15 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:03.480 04:08:15 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:05:03.480 04:08:15 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:05:03.480 04:08:15 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:03.480 04:08:15 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:03.480 04:08:15 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:03.480 04:08:15 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:05:03.480 04:08:15 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:05:03.480 04:08:15 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:03.480 04:08:15 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:03.480 04:08:15 -- spdk/autotest.sh@96 -- # (( 0 > 0 )) 00:05:03.480 04:08:15 -- spdk/autotest.sh@108 -- # ls /dev/nvme0n1 /dev/nvme1n1 /dev/nvme1n2 /dev/nvme1n3 00:05:03.480 04:08:15 -- spdk/autotest.sh@108 -- # grep -v p 00:05:03.480 04:08:15 -- spdk/autotest.sh@108 -- # for dev in $(ls 
/dev/nvme*n* | grep -v p || true) 00:05:03.480 04:08:15 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:05:03.480 04:08:15 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme0n1 00:05:03.480 04:08:15 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:05:03.480 04:08:15 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:03.480 No valid GPT data, bailing 00:05:03.480 04:08:15 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:03.480 04:08:15 -- scripts/common.sh@393 -- # pt= 00:05:03.480 04:08:15 -- scripts/common.sh@394 -- # return 1 00:05:03.480 04:08:15 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:03.480 1+0 records in 00:05:03.480 1+0 records out 00:05:03.480 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00524028 s, 200 MB/s 00:05:03.480 04:08:15 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:05:03.480 04:08:15 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:05:03.480 04:08:15 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n1 00:05:03.480 04:08:15 -- scripts/common.sh@380 -- # local block=/dev/nvme1n1 pt 00:05:03.480 04:08:15 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:03.480 No valid GPT data, bailing 00:05:03.480 04:08:15 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:03.480 04:08:15 -- scripts/common.sh@393 -- # pt= 00:05:03.480 04:08:15 -- scripts/common.sh@394 -- # return 1 00:05:03.480 04:08:15 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:03.480 1+0 records in 00:05:03.480 1+0 records out 00:05:03.480 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00487632 s, 215 MB/s 00:05:03.480 04:08:15 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:05:03.480 04:08:15 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:05:03.480 04:08:15 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n2 00:05:03.480 04:08:15 -- scripts/common.sh@380 -- # local block=/dev/nvme1n2 pt 00:05:03.480 04:08:15 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:05:03.480 No valid GPT data, bailing 00:05:03.480 04:08:15 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:03.480 04:08:15 -- scripts/common.sh@393 -- # pt= 00:05:03.480 04:08:15 -- scripts/common.sh@394 -- # return 1 00:05:03.480 04:08:15 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:05:03.480 1+0 records in 00:05:03.480 1+0 records out 00:05:03.480 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0050511 s, 208 MB/s 00:05:03.480 04:08:15 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:05:03.480 04:08:15 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:05:03.480 04:08:15 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n3 00:05:03.480 04:08:15 -- scripts/common.sh@380 -- # local block=/dev/nvme1n3 pt 00:05:03.480 04:08:15 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:05:03.480 No valid GPT data, bailing 00:05:03.480 04:08:15 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:03.480 04:08:15 -- scripts/common.sh@393 -- # pt= 00:05:03.480 04:08:15 -- scripts/common.sh@394 -- # return 1 00:05:03.480 04:08:15 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:05:03.480 1+0 records in 00:05:03.480 1+0 records out 00:05:03.480 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.0044926 s, 233 MB/s 00:05:03.480 04:08:15 -- spdk/autotest.sh@116 -- # sync 00:05:03.480 04:08:15 -- spdk/autotest.sh@118 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:03.480 04:08:15 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:03.480 04:08:15 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:05.385 04:08:17 -- spdk/autotest.sh@122 -- # uname -s 00:05:05.385 04:08:17 -- spdk/autotest.sh@122 -- # '[' Linux = Linux ']' 00:05:05.385 04:08:17 -- spdk/autotest.sh@123 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:05.385 04:08:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:05.385 04:08:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:05.385 04:08:17 -- common/autotest_common.sh@10 -- # set +x 00:05:05.385 ************************************ 00:05:05.385 START TEST setup.sh 00:05:05.385 ************************************ 00:05:05.385 04:08:17 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:05.385 * Looking for test storage... 00:05:05.385 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:05.385 04:08:17 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:05.385 04:08:17 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:05.385 04:08:17 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:05.649 04:08:17 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:05.649 04:08:17 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:05.649 04:08:17 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:05.649 04:08:17 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:05.649 04:08:17 -- scripts/common.sh@335 -- # IFS=.-: 00:05:05.649 04:08:17 -- scripts/common.sh@335 -- # read -ra ver1 00:05:05.649 04:08:17 -- scripts/common.sh@336 -- # IFS=.-: 00:05:05.649 04:08:17 -- scripts/common.sh@336 -- # read -ra ver2 00:05:05.649 04:08:17 -- scripts/common.sh@337 -- # local 'op=<' 00:05:05.649 04:08:17 -- scripts/common.sh@339 -- # ver1_l=2 00:05:05.649 04:08:17 -- scripts/common.sh@340 -- # ver2_l=1 00:05:05.649 04:08:17 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:05.649 04:08:17 -- scripts/common.sh@343 -- # case "$op" in 00:05:05.649 04:08:17 -- scripts/common.sh@344 -- # : 1 00:05:05.649 04:08:17 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:05.649 04:08:17 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:05.649 04:08:17 -- scripts/common.sh@364 -- # decimal 1 00:05:05.649 04:08:17 -- scripts/common.sh@352 -- # local d=1 00:05:05.649 04:08:17 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:05.649 04:08:17 -- scripts/common.sh@354 -- # echo 1 00:05:05.649 04:08:17 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:05.649 04:08:17 -- scripts/common.sh@365 -- # decimal 2 00:05:05.649 04:08:18 -- scripts/common.sh@352 -- # local d=2 00:05:05.649 04:08:18 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:05.649 04:08:18 -- scripts/common.sh@354 -- # echo 2 00:05:05.649 04:08:18 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:05.649 04:08:18 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:05.649 04:08:18 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:05.649 04:08:18 -- scripts/common.sh@367 -- # return 0 00:05:05.649 04:08:18 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:05.649 04:08:18 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:05.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.649 --rc genhtml_branch_coverage=1 00:05:05.649 --rc genhtml_function_coverage=1 00:05:05.649 --rc genhtml_legend=1 00:05:05.649 --rc geninfo_all_blocks=1 00:05:05.649 --rc geninfo_unexecuted_blocks=1 00:05:05.649 00:05:05.649 ' 00:05:05.649 04:08:18 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:05.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.649 --rc genhtml_branch_coverage=1 00:05:05.649 --rc genhtml_function_coverage=1 00:05:05.649 --rc genhtml_legend=1 00:05:05.649 --rc geninfo_all_blocks=1 00:05:05.649 --rc geninfo_unexecuted_blocks=1 00:05:05.649 00:05:05.649 ' 00:05:05.649 04:08:18 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:05.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.649 --rc genhtml_branch_coverage=1 00:05:05.649 --rc genhtml_function_coverage=1 00:05:05.649 --rc genhtml_legend=1 00:05:05.649 --rc geninfo_all_blocks=1 00:05:05.649 --rc geninfo_unexecuted_blocks=1 00:05:05.649 00:05:05.649 ' 00:05:05.649 04:08:18 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:05.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.649 --rc genhtml_branch_coverage=1 00:05:05.649 --rc genhtml_function_coverage=1 00:05:05.649 --rc genhtml_legend=1 00:05:05.650 --rc geninfo_all_blocks=1 00:05:05.650 --rc geninfo_unexecuted_blocks=1 00:05:05.650 00:05:05.650 ' 00:05:05.650 04:08:18 -- setup/test-setup.sh@10 -- # uname -s 00:05:05.650 04:08:18 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:05:05.650 04:08:18 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:05.650 04:08:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:05.650 04:08:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:05.650 04:08:18 -- common/autotest_common.sh@10 -- # set +x 00:05:05.650 ************************************ 00:05:05.650 START TEST acl 00:05:05.650 ************************************ 00:05:05.650 04:08:18 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:05.650 * Looking for test storage... 
00:05:05.650 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:05.650 04:08:18 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:05.650 04:08:18 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:05.650 04:08:18 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:05.650 04:08:18 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:05.650 04:08:18 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:05.650 04:08:18 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:05.650 04:08:18 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:05.650 04:08:18 -- scripts/common.sh@335 -- # IFS=.-: 00:05:05.650 04:08:18 -- scripts/common.sh@335 -- # read -ra ver1 00:05:05.650 04:08:18 -- scripts/common.sh@336 -- # IFS=.-: 00:05:05.650 04:08:18 -- scripts/common.sh@336 -- # read -ra ver2 00:05:05.650 04:08:18 -- scripts/common.sh@337 -- # local 'op=<' 00:05:05.650 04:08:18 -- scripts/common.sh@339 -- # ver1_l=2 00:05:05.650 04:08:18 -- scripts/common.sh@340 -- # ver2_l=1 00:05:05.650 04:08:18 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:05.650 04:08:18 -- scripts/common.sh@343 -- # case "$op" in 00:05:05.650 04:08:18 -- scripts/common.sh@344 -- # : 1 00:05:05.650 04:08:18 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:05.650 04:08:18 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:05.650 04:08:18 -- scripts/common.sh@364 -- # decimal 1 00:05:05.650 04:08:18 -- scripts/common.sh@352 -- # local d=1 00:05:05.650 04:08:18 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:05.650 04:08:18 -- scripts/common.sh@354 -- # echo 1 00:05:05.650 04:08:18 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:05.650 04:08:18 -- scripts/common.sh@365 -- # decimal 2 00:05:05.650 04:08:18 -- scripts/common.sh@352 -- # local d=2 00:05:05.650 04:08:18 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:05.650 04:08:18 -- scripts/common.sh@354 -- # echo 2 00:05:05.650 04:08:18 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:05.650 04:08:18 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:05.650 04:08:18 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:05.650 04:08:18 -- scripts/common.sh@367 -- # return 0 00:05:05.650 04:08:18 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:05.650 04:08:18 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:05.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.650 --rc genhtml_branch_coverage=1 00:05:05.650 --rc genhtml_function_coverage=1 00:05:05.650 --rc genhtml_legend=1 00:05:05.650 --rc geninfo_all_blocks=1 00:05:05.650 --rc geninfo_unexecuted_blocks=1 00:05:05.650 00:05:05.650 ' 00:05:05.650 04:08:18 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:05.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.650 --rc genhtml_branch_coverage=1 00:05:05.650 --rc genhtml_function_coverage=1 00:05:05.650 --rc genhtml_legend=1 00:05:05.650 --rc geninfo_all_blocks=1 00:05:05.650 --rc geninfo_unexecuted_blocks=1 00:05:05.650 00:05:05.650 ' 00:05:05.650 04:08:18 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:05.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.650 --rc genhtml_branch_coverage=1 00:05:05.650 --rc genhtml_function_coverage=1 00:05:05.650 --rc genhtml_legend=1 00:05:05.650 --rc geninfo_all_blocks=1 00:05:05.650 --rc geninfo_unexecuted_blocks=1 00:05:05.650 00:05:05.650 ' 00:05:05.650 04:08:18 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:05.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.650 --rc genhtml_branch_coverage=1 00:05:05.650 --rc genhtml_function_coverage=1 00:05:05.650 --rc genhtml_legend=1 00:05:05.650 --rc geninfo_all_blocks=1 00:05:05.650 --rc geninfo_unexecuted_blocks=1 00:05:05.650 00:05:05.650 ' 00:05:05.650 04:08:18 -- setup/acl.sh@10 -- # get_zoned_devs 00:05:05.650 04:08:18 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:05:05.650 04:08:18 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:05:05.650 04:08:18 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:05:05.650 04:08:18 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:05.650 04:08:18 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:05:05.650 04:08:18 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:05:05.650 04:08:18 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:05.650 04:08:18 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:05.650 04:08:18 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:05.650 04:08:18 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:05:05.650 04:08:18 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:05:05.650 04:08:18 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:05.650 04:08:18 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:05.650 04:08:18 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:05.650 04:08:18 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:05:05.650 04:08:18 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:05:05.650 04:08:18 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:05.650 04:08:18 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:05.650 04:08:18 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:05.650 04:08:18 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:05:05.650 04:08:18 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:05:05.650 04:08:18 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:05.650 04:08:18 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:05.650 04:08:18 -- setup/acl.sh@12 -- # devs=() 00:05:05.650 04:08:18 -- setup/acl.sh@12 -- # declare -a devs 00:05:05.650 04:08:18 -- setup/acl.sh@13 -- # drivers=() 00:05:05.650 04:08:18 -- setup/acl.sh@13 -- # declare -A drivers 00:05:05.650 04:08:18 -- setup/acl.sh@51 -- # setup reset 00:05:05.650 04:08:18 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:05.650 04:08:18 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:06.587 04:08:18 -- setup/acl.sh@52 -- # collect_setup_devs 00:05:06.587 04:08:18 -- setup/acl.sh@16 -- # local dev driver 00:05:06.587 04:08:18 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:06.587 04:08:18 -- setup/acl.sh@15 -- # setup output status 00:05:06.587 04:08:18 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:06.587 04:08:18 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:06.587 Hugepages 00:05:06.587 node hugesize free / total 00:05:06.587 04:08:19 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:05:06.587 04:08:19 -- setup/acl.sh@19 -- # continue 00:05:06.587 04:08:19 -- setup/acl.sh@18 -- # read -r _ 
dev _ _ _ driver _ 00:05:06.587 00:05:06.587 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:06.587 04:08:19 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:05:06.587 04:08:19 -- setup/acl.sh@19 -- # continue 00:05:06.587 04:08:19 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:06.587 04:08:19 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:05:06.587 04:08:19 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:05:06.587 04:08:19 -- setup/acl.sh@20 -- # continue 00:05:06.587 04:08:19 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:06.846 04:08:19 -- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]] 00:05:06.846 04:08:19 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:06.846 04:08:19 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:05:06.846 04:08:19 -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:06.846 04:08:19 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:06.846 04:08:19 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:06.846 04:08:19 -- setup/acl.sh@19 -- # [[ 0000:00:07.0 == *:*:*.* ]] 00:05:06.846 04:08:19 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:06.846 04:08:19 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:05:06.846 04:08:19 -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:06.846 04:08:19 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:06.846 04:08:19 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:06.846 04:08:19 -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:05:06.846 04:08:19 -- setup/acl.sh@54 -- # run_test denied denied 00:05:06.846 04:08:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:06.846 04:08:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:06.846 04:08:19 -- common/autotest_common.sh@10 -- # set +x 00:05:06.846 ************************************ 00:05:06.846 START TEST denied 00:05:06.846 ************************************ 00:05:06.846 04:08:19 -- common/autotest_common.sh@1114 -- # denied 00:05:06.846 04:08:19 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:06.0' 00:05:06.846 04:08:19 -- setup/acl.sh@38 -- # setup output config 00:05:06.846 04:08:19 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0' 00:05:06.846 04:08:19 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:06.846 04:08:19 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:07.780 0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0 00:05:07.780 04:08:20 -- setup/acl.sh@40 -- # verify 0000:00:06.0 00:05:07.780 04:08:20 -- setup/acl.sh@28 -- # local dev driver 00:05:07.780 04:08:20 -- setup/acl.sh@30 -- # for dev in "$@" 00:05:07.780 04:08:20 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]] 00:05:07.780 04:08:20 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver 00:05:07.780 04:08:20 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:07.780 04:08:20 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:07.780 04:08:20 -- setup/acl.sh@41 -- # setup reset 00:05:07.780 04:08:20 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:07.780 04:08:20 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:08.346 00:05:08.346 real 0m1.478s 00:05:08.346 user 0m0.588s 00:05:08.346 sys 0m0.826s 00:05:08.346 04:08:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:08.346 04:08:20 -- common/autotest_common.sh@10 -- # set +x 00:05:08.346 ************************************ 00:05:08.346 END TEST denied 00:05:08.346 
************************************ 00:05:08.346 04:08:20 -- setup/acl.sh@55 -- # run_test allowed allowed 00:05:08.346 04:08:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:08.346 04:08:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:08.346 04:08:20 -- common/autotest_common.sh@10 -- # set +x 00:05:08.346 ************************************ 00:05:08.346 START TEST allowed 00:05:08.346 ************************************ 00:05:08.346 04:08:20 -- common/autotest_common.sh@1114 -- # allowed 00:05:08.346 04:08:20 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0 00:05:08.346 04:08:20 -- setup/acl.sh@45 -- # setup output config 00:05:08.346 04:08:20 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:08.346 04:08:20 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:08.346 04:08:20 -- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*' 00:05:09.279 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:09.279 04:08:21 -- setup/acl.sh@47 -- # verify 0000:00:07.0 00:05:09.279 04:08:21 -- setup/acl.sh@28 -- # local dev driver 00:05:09.279 04:08:21 -- setup/acl.sh@30 -- # for dev in "$@" 00:05:09.279 04:08:21 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:07.0 ]] 00:05:09.279 04:08:21 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:07.0/driver 00:05:09.279 04:08:21 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:09.279 04:08:21 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:09.279 04:08:21 -- setup/acl.sh@48 -- # setup reset 00:05:09.279 04:08:21 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:09.279 04:08:21 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:09.845 00:05:09.845 real 0m1.534s 00:05:09.845 user 0m0.702s 00:05:09.845 sys 0m0.837s 00:05:09.845 04:08:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:09.845 04:08:22 -- common/autotest_common.sh@10 -- # set +x 00:05:09.845 ************************************ 00:05:09.845 END TEST allowed 00:05:09.845 ************************************ 00:05:09.845 00:05:09.845 real 0m4.379s 00:05:09.845 user 0m1.919s 00:05:09.845 sys 0m2.426s 00:05:09.845 04:08:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:09.845 04:08:22 -- common/autotest_common.sh@10 -- # set +x 00:05:09.846 ************************************ 00:05:09.846 END TEST acl 00:05:09.846 ************************************ 00:05:10.105 04:08:22 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:10.105 04:08:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:10.105 04:08:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:10.105 04:08:22 -- common/autotest_common.sh@10 -- # set +x 00:05:10.105 ************************************ 00:05:10.105 START TEST hugepages 00:05:10.106 ************************************ 00:05:10.106 04:08:22 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:10.106 * Looking for test storage... 
00:05:10.106 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:10.106 04:08:22 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:10.106 04:08:22 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:10.106 04:08:22 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:10.106 04:08:22 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:10.106 04:08:22 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:10.106 04:08:22 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:10.106 04:08:22 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:10.106 04:08:22 -- scripts/common.sh@335 -- # IFS=.-: 00:05:10.106 04:08:22 -- scripts/common.sh@335 -- # read -ra ver1 00:05:10.106 04:08:22 -- scripts/common.sh@336 -- # IFS=.-: 00:05:10.106 04:08:22 -- scripts/common.sh@336 -- # read -ra ver2 00:05:10.106 04:08:22 -- scripts/common.sh@337 -- # local 'op=<' 00:05:10.106 04:08:22 -- scripts/common.sh@339 -- # ver1_l=2 00:05:10.106 04:08:22 -- scripts/common.sh@340 -- # ver2_l=1 00:05:10.106 04:08:22 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:10.106 04:08:22 -- scripts/common.sh@343 -- # case "$op" in 00:05:10.106 04:08:22 -- scripts/common.sh@344 -- # : 1 00:05:10.106 04:08:22 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:10.106 04:08:22 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:10.106 04:08:22 -- scripts/common.sh@364 -- # decimal 1 00:05:10.106 04:08:22 -- scripts/common.sh@352 -- # local d=1 00:05:10.106 04:08:22 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:10.106 04:08:22 -- scripts/common.sh@354 -- # echo 1 00:05:10.106 04:08:22 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:10.106 04:08:22 -- scripts/common.sh@365 -- # decimal 2 00:05:10.106 04:08:22 -- scripts/common.sh@352 -- # local d=2 00:05:10.106 04:08:22 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:10.106 04:08:22 -- scripts/common.sh@354 -- # echo 2 00:05:10.106 04:08:22 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:10.106 04:08:22 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:10.106 04:08:22 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:10.106 04:08:22 -- scripts/common.sh@367 -- # return 0 00:05:10.106 04:08:22 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:10.106 04:08:22 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:10.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.106 --rc genhtml_branch_coverage=1 00:05:10.106 --rc genhtml_function_coverage=1 00:05:10.106 --rc genhtml_legend=1 00:05:10.106 --rc geninfo_all_blocks=1 00:05:10.106 --rc geninfo_unexecuted_blocks=1 00:05:10.106 00:05:10.106 ' 00:05:10.106 04:08:22 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:10.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.106 --rc genhtml_branch_coverage=1 00:05:10.106 --rc genhtml_function_coverage=1 00:05:10.106 --rc genhtml_legend=1 00:05:10.106 --rc geninfo_all_blocks=1 00:05:10.106 --rc geninfo_unexecuted_blocks=1 00:05:10.106 00:05:10.106 ' 00:05:10.106 04:08:22 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:10.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.106 --rc genhtml_branch_coverage=1 00:05:10.106 --rc genhtml_function_coverage=1 00:05:10.106 --rc genhtml_legend=1 00:05:10.106 --rc geninfo_all_blocks=1 00:05:10.106 --rc geninfo_unexecuted_blocks=1 00:05:10.106 00:05:10.106 ' 00:05:10.106 04:08:22 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:10.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.106 --rc genhtml_branch_coverage=1 00:05:10.106 --rc genhtml_function_coverage=1 00:05:10.106 --rc genhtml_legend=1 00:05:10.106 --rc geninfo_all_blocks=1 00:05:10.106 --rc geninfo_unexecuted_blocks=1 00:05:10.106 00:05:10.106 ' 00:05:10.106 04:08:22 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:05:10.106 04:08:22 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:05:10.106 04:08:22 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:05:10.106 04:08:22 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:05:10.106 04:08:22 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:05:10.106 04:08:22 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:05:10.106 04:08:22 -- setup/common.sh@17 -- # local get=Hugepagesize 00:05:10.106 04:08:22 -- setup/common.sh@18 -- # local node= 00:05:10.106 04:08:22 -- setup/common.sh@19 -- # local var val 00:05:10.106 04:08:22 -- setup/common.sh@20 -- # local mem_f mem 00:05:10.106 04:08:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:10.106 04:08:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:10.106 04:08:22 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:10.106 04:08:22 -- setup/common.sh@28 -- # mapfile -t mem 00:05:10.106 04:08:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:10.106 04:08:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.106 04:08:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 4550152 kB' 'MemAvailable: 7343484 kB' 'Buffers: 2684 kB' 'Cached: 2996892 kB' 'SwapCached: 0 kB' 'Active: 454720 kB' 'Inactive: 2661204 kB' 'Active(anon): 126860 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2661204 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'AnonPages: 118488 kB' 'Mapped: 50756 kB' 'Shmem: 10512 kB' 'KReclaimable: 82840 kB' 'Slab: 182464 kB' 'SReclaimable: 82840 kB' 'SUnreclaim: 99624 kB' 'KernelStack: 6656 kB' 'PageTables: 4424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12411004 kB' 'Committed_AS: 318900 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55352 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:05:10.106 04:08:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.106 04:08:22 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:10.106 04:08:22 -- setup/common.sh@32 -- # continue 00:05:10.106 04:08:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.106 04:08:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.106 04:08:22 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:10.106 04:08:22 -- setup/common.sh@32 -- # continue 00:05:10.106 04:08:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.106 04:08:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.106 04:08:22 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:10.106 04:08:22 -- 
setup/common.sh@32 -- # continue 00:05:10.106 04:08:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.106 04:08:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.106 04:08:22 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:10.106 04:08:22 -- setup/common.sh@32 -- # continue 00:05:10.106 04:08:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.106 04:08:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.106 04:08:22 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:10.106 04:08:22 -- setup/common.sh@32 -- # continue 00:05:10.106 04:08:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.106 04:08:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.106 04:08:22 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:10.106 04:08:22 -- setup/common.sh@32 -- # continue 00:05:10.106 04:08:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.106 04:08:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.106 04:08:22 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:10.106 04:08:22 -- setup/common.sh@32 -- # continue 00:05:10.106 04:08:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.106 04:08:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.106 04:08:22 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:10.106 04:08:22 -- setup/common.sh@32 -- # continue 00:05:10.106 04:08:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.106 04:08:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.106 04:08:22 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:10.106 04:08:22 -- setup/common.sh@32 -- # continue 00:05:10.106 04:08:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.106 04:08:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.106 04:08:22 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:10.106 04:08:22 -- setup/common.sh@32 -- # continue 00:05:10.106 04:08:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.106 04:08:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.106 04:08:22 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:10.106 04:08:22 -- setup/common.sh@32 -- # continue 00:05:10.106 04:08:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.106 04:08:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.106 04:08:22 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:10.106 04:08:22 -- setup/common.sh@32 -- # continue 00:05:10.106 04:08:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.106 04:08:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.106 04:08:22 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:10.106 04:08:22 -- setup/common.sh@32 -- # continue 00:05:10.106 04:08:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.106 04:08:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.106 04:08:22 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:10.106 04:08:22 -- setup/common.sh@32 -- # continue 00:05:10.106 04:08:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.106 04:08:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.106 04:08:22 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:10.106 04:08:22 -- setup/common.sh@32 -- # continue 00:05:10.106 04:08:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.106 04:08:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.107 04:08:22 -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:10.107 04:08:22 -- setup/common.sh@32 -- # continue 00:05:10.107 04:08:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.107 04:08:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.107 04:08:22 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:10.107 04:08:22 -- setup/common.sh@32 -- # continue 00:05:10.107 04:08:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.107 04:08:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.107 04:08:22 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:10.107 04:08:22 -- setup/common.sh@32 -- # continue 00:05:10.107 04:08:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.107 04:08:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.107 04:08:22 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:10.107 04:08:22 -- setup/common.sh@32 -- # continue 00:05:10.107 04:08:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.107 04:08:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.107 04:08:22 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:10.107 04:08:22 -- setup/common.sh@32 -- # continue 00:05:10.107 04:08:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.107 04:08:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.107 04:08:22 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:10.107 04:08:22 -- setup/common.sh@32 -- # continue 00:05:10.107 04:08:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.107 04:08:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.107 04:08:22 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:10.107 04:08:22 -- setup/common.sh@32 -- # continue 00:05:10.107 04:08:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.107 04:08:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.107 04:08:22 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:10.107 04:08:22 -- setup/common.sh@32 -- # continue 00:05:10.107 04:08:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.107 04:08:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.107 04:08:22 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:10.107 04:08:22 -- setup/common.sh@32 -- # continue 00:05:10.107 04:08:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.107 04:08:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.107 04:08:22 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:10.107 04:08:22 -- setup/common.sh@32 -- # continue 00:05:10.107 04:08:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.107 04:08:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.107 04:08:22 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:10.107 04:08:22 -- setup/common.sh@32 -- # continue 00:05:10.107 04:08:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.107 04:08:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.107 04:08:22 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:10.107 04:08:22 -- setup/common.sh@32 -- # continue 00:05:10.107 04:08:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.107 04:08:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.107 04:08:22 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:10.107 04:08:22 -- setup/common.sh@32 -- # continue 00:05:10.107 04:08:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.107 04:08:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.107 04:08:22 -- setup/common.sh@32 
-- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:10.107 04:08:22 -- setup/common.sh@32 -- # continue 00:05:10.107 04:08:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.107 04:08:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.107 04:08:22 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:10.107 04:08:22 -- setup/common.sh@32 -- # continue 00:05:10.107 04:08:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.107 04:08:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.107 04:08:22 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:10.107 04:08:22 -- setup/common.sh@32 -- # continue 00:05:10.107 04:08:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.107 04:08:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.107 04:08:22 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:10.107 04:08:22 -- setup/common.sh@32 -- # continue 00:05:10.107 04:08:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.107 04:08:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.366 04:08:22 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:10.366 04:08:22 -- setup/common.sh@32 -- # continue 00:05:10.366 04:08:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.366 04:08:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.366 04:08:22 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:10.366 04:08:22 -- setup/common.sh@32 -- # continue 00:05:10.366 04:08:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.366 04:08:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.366 04:08:22 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:10.366 04:08:22 -- setup/common.sh@32 -- # continue 00:05:10.366 04:08:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.366 04:08:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.366 04:08:22 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:10.366 04:08:22 -- setup/common.sh@32 -- # continue 00:05:10.366 04:08:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.366 04:08:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.366 04:08:22 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:10.366 04:08:22 -- setup/common.sh@32 -- # continue 00:05:10.366 04:08:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.366 04:08:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.366 04:08:22 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:10.366 04:08:22 -- setup/common.sh@32 -- # continue 00:05:10.366 04:08:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.366 04:08:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.366 04:08:22 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:10.366 04:08:22 -- setup/common.sh@32 -- # continue 00:05:10.366 04:08:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.366 04:08:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.366 04:08:22 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:10.366 04:08:22 -- setup/common.sh@32 -- # continue 00:05:10.366 04:08:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.366 04:08:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.366 04:08:22 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:10.366 04:08:22 -- setup/common.sh@32 -- # continue 00:05:10.366 04:08:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.366 04:08:22 -- setup/common.sh@31 -- 
# read -r var val _ 00:05:10.366 04:08:22 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:10.366 04:08:22 -- setup/common.sh@32 -- # continue 00:05:10.366 04:08:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.366 04:08:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.366 04:08:22 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:10.366 04:08:22 -- setup/common.sh@32 -- # continue 00:05:10.366 04:08:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.366 04:08:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.366 04:08:22 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:10.366 04:08:22 -- setup/common.sh@32 -- # continue 00:05:10.366 04:08:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.366 04:08:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.366 04:08:22 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:10.367 04:08:22 -- setup/common.sh@32 -- # continue 00:05:10.367 04:08:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.367 04:08:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.367 04:08:22 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:10.367 04:08:22 -- setup/common.sh@32 -- # continue 00:05:10.367 04:08:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.367 04:08:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.367 04:08:22 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:10.367 04:08:22 -- setup/common.sh@32 -- # continue 00:05:10.367 04:08:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.367 04:08:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.367 04:08:22 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:10.367 04:08:22 -- setup/common.sh@32 -- # continue 00:05:10.367 04:08:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.367 04:08:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.367 04:08:22 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:10.367 04:08:22 -- setup/common.sh@32 -- # continue 00:05:10.367 04:08:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.367 04:08:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.367 04:08:22 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:10.367 04:08:22 -- setup/common.sh@32 -- # continue 00:05:10.367 04:08:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.367 04:08:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.367 04:08:22 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:10.367 04:08:22 -- setup/common.sh@32 -- # continue 00:05:10.367 04:08:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.367 04:08:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.367 04:08:22 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:10.367 04:08:22 -- setup/common.sh@32 -- # continue 00:05:10.367 04:08:22 -- setup/common.sh@31 -- # IFS=': ' 00:05:10.367 04:08:22 -- setup/common.sh@31 -- # read -r var val _ 00:05:10.367 04:08:22 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:10.367 04:08:22 -- setup/common.sh@33 -- # echo 2048 00:05:10.367 04:08:22 -- setup/common.sh@33 -- # return 0 00:05:10.367 04:08:22 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:05:10.367 04:08:22 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:05:10.367 04:08:22 -- setup/hugepages.sh@18 -- 
# global_huge_nr=/proc/sys/vm/nr_hugepages 00:05:10.367 04:08:22 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:05:10.367 04:08:22 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:05:10.367 04:08:22 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:05:10.367 04:08:22 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:05:10.367 04:08:22 -- setup/hugepages.sh@207 -- # get_nodes 00:05:10.367 04:08:22 -- setup/hugepages.sh@27 -- # local node 00:05:10.367 04:08:22 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:10.367 04:08:22 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:05:10.367 04:08:22 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:10.367 04:08:22 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:10.367 04:08:22 -- setup/hugepages.sh@208 -- # clear_hp 00:05:10.367 04:08:22 -- setup/hugepages.sh@37 -- # local node hp 00:05:10.367 04:08:22 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:10.367 04:08:22 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:10.367 04:08:22 -- setup/hugepages.sh@41 -- # echo 0 00:05:10.367 04:08:22 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:10.367 04:08:22 -- setup/hugepages.sh@41 -- # echo 0 00:05:10.367 04:08:22 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:10.367 04:08:22 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:10.367 04:08:22 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:05:10.367 04:08:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:10.367 04:08:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:10.367 04:08:22 -- common/autotest_common.sh@10 -- # set +x 00:05:10.367 ************************************ 00:05:10.367 START TEST default_setup 00:05:10.367 ************************************ 00:05:10.367 04:08:22 -- common/autotest_common.sh@1114 -- # default_setup 00:05:10.367 04:08:22 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:05:10.367 04:08:22 -- setup/hugepages.sh@49 -- # local size=2097152 00:05:10.367 04:08:22 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:10.367 04:08:22 -- setup/hugepages.sh@51 -- # shift 00:05:10.367 04:08:22 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:10.367 04:08:22 -- setup/hugepages.sh@52 -- # local node_ids 00:05:10.367 04:08:22 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:10.367 04:08:22 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:10.367 04:08:22 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:10.367 04:08:22 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:10.367 04:08:22 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:10.367 04:08:22 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:10.367 04:08:22 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:10.367 04:08:22 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:10.367 04:08:22 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:10.367 04:08:22 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:10.367 04:08:22 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:10.367 04:08:22 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:10.367 04:08:22 -- setup/hugepages.sh@73 -- # return 0 00:05:10.367 04:08:22 -- setup/hugepages.sh@137 -- # setup output 00:05:10.367 04:08:22 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:10.367 04:08:22 -- setup/common.sh@10 
-- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:10.934 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:10.934 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:11.196 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:05:11.196 04:08:23 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:05:11.196 04:08:23 -- setup/hugepages.sh@89 -- # local node 00:05:11.196 04:08:23 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:11.196 04:08:23 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:11.196 04:08:23 -- setup/hugepages.sh@92 -- # local surp 00:05:11.196 04:08:23 -- setup/hugepages.sh@93 -- # local resv 00:05:11.196 04:08:23 -- setup/hugepages.sh@94 -- # local anon 00:05:11.196 04:08:23 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:11.196 04:08:23 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:11.196 04:08:23 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:11.196 04:08:23 -- setup/common.sh@18 -- # local node= 00:05:11.196 04:08:23 -- setup/common.sh@19 -- # local var val 00:05:11.196 04:08:23 -- setup/common.sh@20 -- # local mem_f mem 00:05:11.196 04:08:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:11.196 04:08:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:11.196 04:08:23 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:11.196 04:08:23 -- setup/common.sh@28 -- # mapfile -t mem 00:05:11.196 04:08:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:11.196 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.197 04:08:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6651384 kB' 'MemAvailable: 9444560 kB' 'Buffers: 2684 kB' 'Cached: 2996888 kB' 'SwapCached: 0 kB' 'Active: 456612 kB' 'Inactive: 2661208 kB' 'Active(anon): 128752 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2661208 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 119836 kB' 'Mapped: 50884 kB' 'Shmem: 10488 kB' 'KReclaimable: 82520 kB' 'Slab: 182084 kB' 'SReclaimable: 82520 kB' 'SUnreclaim: 99564 kB' 'KernelStack: 6608 kB' 'PageTables: 4344 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 320012 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55352 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:05:11.197 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.197 04:08:23 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.197 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.197 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.197 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.197 04:08:23 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.197 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.197 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.197 04:08:23 -- setup/common.sh@31 -- # read 
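Each of the long runs of [[ Field == \A\n\o\n... ]] / continue entries that follow is the xtrace rendering of get_meminfo walking /proc/meminfo (or a per-node meminfo file) entry by entry until the requested key matches, then echoing its value. A hedged, stripped-down equivalent of that lookup, using a sketch function name rather than the real helper:

get_meminfo_sketch() {
    local key=$1 node=${2:-}
    local file=/proc/meminfo
    # With a node argument, read the per-node file instead; its lines carry a
    # "Node <n> " prefix that must be stripped before the key can match.
    [[ -n $node ]] && file=/sys/devices/system/node/node${node}/meminfo
    sed 's/^Node [0-9]* //' "$file" | awk -v k="$key" -F': *' '$1 == k {print $2 + 0}'
}
get_meminfo_sketch AnonHugePages     # system-wide value, in kB
get_meminfo_sketch HugePages_Free 0  # node 0 only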
-r var val _ 00:05:11.197 04:08:23 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.197 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.197 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.197 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.197 04:08:23 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.197 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.197 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.197 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.197 04:08:23 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.197 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.197 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.197 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.197 04:08:23 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.197 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.197 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.197 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.197 04:08:23 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.197 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.197 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.197 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.197 04:08:23 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.197 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.197 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.197 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.197 04:08:23 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.197 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.197 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.197 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.197 04:08:23 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.197 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.197 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.197 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.197 04:08:23 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.197 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.197 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.197 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.197 04:08:23 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.197 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.197 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.197 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.197 04:08:23 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.197 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.197 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.197 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.197 04:08:23 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.197 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.197 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.197 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.197 04:08:23 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.197 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.197 04:08:23 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:11.197 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.197 04:08:23 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.197 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.197 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.197 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.197 04:08:23 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.197 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.197 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.197 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.197 04:08:23 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.197 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.197 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.197 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.197 04:08:23 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.197 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.197 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.197 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.197 04:08:23 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.197 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.197 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.197 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.197 04:08:23 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.197 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.197 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.197 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.197 04:08:23 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.197 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.197 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.197 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.197 04:08:23 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.197 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.197 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.197 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.197 04:08:23 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.197 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.197 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.197 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.197 04:08:23 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.197 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.197 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.197 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.197 04:08:23 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.197 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.197 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.197 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.197 04:08:23 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.197 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.197 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.197 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.197 04:08:23 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.197 04:08:23 -- 
setup/common.sh@32 -- # continue 00:05:11.197 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.197 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.197 04:08:23 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.197 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.197 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.197 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.197 04:08:23 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.197 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.197 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.197 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.197 04:08:23 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.197 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.197 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.197 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.197 04:08:23 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.197 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.197 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.197 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.197 04:08:23 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.197 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.197 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.197 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.197 04:08:23 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.197 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.197 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.197 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.197 04:08:23 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.197 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.197 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.197 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.197 04:08:23 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.197 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.197 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.197 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.197 04:08:23 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.197 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.197 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.197 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.197 04:08:23 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.197 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.197 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.197 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.197 04:08:23 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.197 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.197 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.197 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.198 04:08:23 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.198 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.198 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.198 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.198 04:08:23 -- 
setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.198 04:08:23 -- setup/common.sh@33 -- # echo 0 00:05:11.198 04:08:23 -- setup/common.sh@33 -- # return 0 00:05:11.198 04:08:23 -- setup/hugepages.sh@97 -- # anon=0 00:05:11.198 04:08:23 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:11.198 04:08:23 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:11.198 04:08:23 -- setup/common.sh@18 -- # local node= 00:05:11.198 04:08:23 -- setup/common.sh@19 -- # local var val 00:05:11.198 04:08:23 -- setup/common.sh@20 -- # local mem_f mem 00:05:11.198 04:08:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:11.198 04:08:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:11.198 04:08:23 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:11.198 04:08:23 -- setup/common.sh@28 -- # mapfile -t mem 00:05:11.198 04:08:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:11.198 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.198 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.198 04:08:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6651636 kB' 'MemAvailable: 9444812 kB' 'Buffers: 2684 kB' 'Cached: 2996888 kB' 'SwapCached: 0 kB' 'Active: 456352 kB' 'Inactive: 2661208 kB' 'Active(anon): 128492 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2661208 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 119644 kB' 'Mapped: 50756 kB' 'Shmem: 10488 kB' 'KReclaimable: 82520 kB' 'Slab: 182084 kB' 'SReclaimable: 82520 kB' 'SUnreclaim: 99564 kB' 'KernelStack: 6656 kB' 'PageTables: 4472 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 320012 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55336 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:05:11.198 04:08:23 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.198 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.198 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.198 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.198 04:08:23 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.198 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.198 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.198 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.198 04:08:23 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.198 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.198 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.198 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.198 04:08:23 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.198 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.198 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.198 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.198 04:08:23 -- 
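AnonHugePages comes back 0 here, so anon=0; the lookup only runs because the earlier "[[ always [madvise] never != *\[\n\e\v\e\r\]* ]]" test found transparent hugepages not disabled (madvise mode) on this host. A small stand-alone sketch of that mode check, independent of the test scripts:

thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)  # e.g. "always [madvise] never"
if [[ $thp != *"[never]"* ]]; then
    # THP is not disabled, so anonymous hugepage usage is worth sampling.
    grep '^AnonHugePages' /proc/meminfo
fi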
setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.198 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.198 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.198 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.198 04:08:23 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.198 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.198 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.198 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.198 04:08:23 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.198 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.198 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.198 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.198 04:08:23 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.198 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.198 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.198 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.198 04:08:23 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.198 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.198 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.198 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.198 04:08:23 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.198 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.198 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.198 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.198 04:08:23 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.198 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.198 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.198 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.198 04:08:23 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.198 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.198 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.198 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.198 04:08:23 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.198 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.198 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.198 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.198 04:08:23 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.198 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.198 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.198 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.198 04:08:23 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.198 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.198 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.198 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.198 04:08:23 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.198 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.198 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.198 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.198 04:08:23 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.198 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.198 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 
00:05:11.198 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.198 04:08:23 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.198 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.198 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.198 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.198 04:08:23 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.198 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.198 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.198 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.198 04:08:23 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.198 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.198 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.198 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.198 04:08:23 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.198 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.198 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.198 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.198 04:08:23 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.198 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.198 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.198 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.198 04:08:23 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.198 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.198 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.198 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.198 04:08:23 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.198 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.198 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.198 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.198 04:08:23 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.198 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.198 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.198 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.198 04:08:23 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.198 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.198 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.198 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.198 04:08:23 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.198 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.198 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.198 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.198 04:08:23 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.198 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.198 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.198 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.198 04:08:23 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.198 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.198 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.198 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.198 04:08:23 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.198 04:08:23 -- 
setup/common.sh@32 -- # continue 00:05:11.198 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.198 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.198 04:08:23 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.198 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.198 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.198 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.198 04:08:23 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.198 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.199 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.199 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.199 04:08:23 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.199 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.199 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.199 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.199 04:08:23 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.199 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.199 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.199 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.199 04:08:23 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.199 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.199 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.199 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.199 04:08:23 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.199 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.199 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.199 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.199 04:08:23 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.199 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.199 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.199 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.199 04:08:23 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.199 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.199 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.199 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.199 04:08:23 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.199 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.199 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.199 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.199 04:08:23 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.199 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.199 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.199 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.199 04:08:23 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.199 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.199 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.199 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.199 04:08:23 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.199 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.199 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.199 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 
00:05:11.199 04:08:23 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.199 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.199 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.199 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.199 04:08:23 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.199 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.199 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.199 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.199 04:08:23 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.199 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.199 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.199 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.199 04:08:23 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.199 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.199 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.199 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.199 04:08:23 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.199 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.199 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.199 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.199 04:08:23 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.199 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.199 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.199 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.199 04:08:23 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.199 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.199 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.199 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.199 04:08:23 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.199 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.199 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.199 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.199 04:08:23 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.199 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.199 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.199 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.199 04:08:23 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.199 04:08:23 -- setup/common.sh@33 -- # echo 0 00:05:11.199 04:08:23 -- setup/common.sh@33 -- # return 0 00:05:11.199 04:08:23 -- setup/hugepages.sh@99 -- # surp=0 00:05:11.199 04:08:23 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:11.199 04:08:23 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:11.199 04:08:23 -- setup/common.sh@18 -- # local node= 00:05:11.199 04:08:23 -- setup/common.sh@19 -- # local var val 00:05:11.199 04:08:23 -- setup/common.sh@20 -- # local mem_f mem 00:05:11.199 04:08:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:11.199 04:08:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:11.199 04:08:23 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:11.199 04:08:23 -- setup/common.sh@28 -- # mapfile -t mem 00:05:11.199 04:08:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:11.199 
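HugePages_Surp is also 0 (surp=0), and the script moves on to HugePages_Rsvd. The same counters are exposed per page size under sysfs; a hedged sketch for the 2048 kB pool this run uses:

hp=/sys/kernel/mm/hugepages/hugepages-2048kB
for f in nr_hugepages free_hugepages resv_hugepages surplus_hugepages; do
    printf '%s=%s\n' "$f" "$(cat "$hp/$f")"   # e.g. nr_hugepages=1024, surplus_hugepages=0
done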
04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.199 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.199 04:08:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6651384 kB' 'MemAvailable: 9444560 kB' 'Buffers: 2684 kB' 'Cached: 2996888 kB' 'SwapCached: 0 kB' 'Active: 456060 kB' 'Inactive: 2661208 kB' 'Active(anon): 128200 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2661208 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 119356 kB' 'Mapped: 50756 kB' 'Shmem: 10488 kB' 'KReclaimable: 82520 kB' 'Slab: 182080 kB' 'SReclaimable: 82520 kB' 'SUnreclaim: 99560 kB' 'KernelStack: 6640 kB' 'PageTables: 4428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 320012 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55336 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:05:11.199 04:08:23 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.199 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.199 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.199 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.199 04:08:23 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.199 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.199 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.199 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.199 04:08:23 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.199 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.199 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.199 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.199 04:08:23 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.199 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.199 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.199 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.199 04:08:23 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.199 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.199 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.199 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.199 04:08:23 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.199 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.199 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.199 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.199 04:08:23 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.199 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.199 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.199 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.199 04:08:23 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.199 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.199 
04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.199 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.199 04:08:23 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.199 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.199 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.199 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.199 04:08:23 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.199 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.199 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.199 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.199 04:08:23 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.199 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.199 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.199 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.199 04:08:23 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.199 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.199 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.199 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.199 04:08:23 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.199 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.199 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.200 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.200 04:08:23 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.200 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.200 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.200 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.200 04:08:23 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.200 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.200 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.200 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.200 04:08:23 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.200 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.200 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.200 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.200 04:08:23 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.200 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.200 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.200 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.200 04:08:23 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.200 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.200 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.200 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.200 04:08:23 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.200 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.200 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.200 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.200 04:08:23 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.200 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.200 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.200 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.200 04:08:23 -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.200 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.200 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.200 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.200 04:08:23 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.200 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.200 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.200 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.200 04:08:23 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.200 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.200 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.200 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.200 04:08:23 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.200 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.200 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.200 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.200 04:08:23 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.200 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.200 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.200 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.200 04:08:23 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.200 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.200 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.200 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.200 04:08:23 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.200 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.200 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.200 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.200 04:08:23 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.200 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.200 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.200 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.200 04:08:23 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.200 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.200 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.200 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.200 04:08:23 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.200 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.200 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.200 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.200 04:08:23 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.200 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.200 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.200 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.200 04:08:23 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.200 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.200 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.200 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.200 04:08:23 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.200 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.200 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.200 04:08:23 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:11.200 04:08:23 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.200 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.200 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.200 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.200 04:08:23 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.200 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.200 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.200 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.200 04:08:23 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.200 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.200 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.200 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.200 04:08:23 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.200 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.200 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.200 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.200 04:08:23 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.200 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.200 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.200 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.200 04:08:23 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.200 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.200 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.200 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.200 04:08:23 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.200 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.200 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.200 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.200 04:08:23 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.200 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.200 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.200 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.200 04:08:23 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.200 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.200 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.200 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.200 04:08:23 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.200 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.200 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.200 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.200 04:08:23 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.200 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.200 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.200 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.200 04:08:23 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.200 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.200 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.200 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.200 04:08:23 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.200 
04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.200 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.200 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.200 04:08:23 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.200 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.200 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.200 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.200 04:08:23 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.200 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.200 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.200 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.200 04:08:23 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.200 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.200 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.200 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.200 04:08:23 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.200 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.200 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.200 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.200 04:08:23 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.200 04:08:23 -- setup/common.sh@33 -- # echo 0 00:05:11.200 04:08:23 -- setup/common.sh@33 -- # return 0 00:05:11.200 nr_hugepages=1024 00:05:11.200 resv_hugepages=0 00:05:11.200 surplus_hugepages=0 00:05:11.200 anon_hugepages=0 00:05:11.200 04:08:23 -- setup/hugepages.sh@100 -- # resv=0 00:05:11.200 04:08:23 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:11.200 04:08:23 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:11.200 04:08:23 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:11.200 04:08:23 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:11.200 04:08:23 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:11.200 04:08:23 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:11.201 04:08:23 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:11.201 04:08:23 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:11.201 04:08:23 -- setup/common.sh@18 -- # local node= 00:05:11.201 04:08:23 -- setup/common.sh@19 -- # local var val 00:05:11.201 04:08:23 -- setup/common.sh@20 -- # local mem_f mem 00:05:11.201 04:08:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:11.201 04:08:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:11.201 04:08:23 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:11.201 04:08:23 -- setup/common.sh@28 -- # mapfile -t mem 00:05:11.201 04:08:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:11.201 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.201 04:08:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6651384 kB' 'MemAvailable: 9444560 kB' 'Buffers: 2684 kB' 'Cached: 2996888 kB' 'SwapCached: 0 kB' 'Active: 456248 kB' 'Inactive: 2661208 kB' 'Active(anon): 128388 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2661208 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 119500 kB' 'Mapped: 50756 kB' 'Shmem: 10488 kB' 'KReclaimable: 82520 kB' 'Slab: 182080 kB' 
'SReclaimable: 82520 kB' 'SUnreclaim: 99560 kB' 'KernelStack: 6624 kB' 'PageTables: 4380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 320012 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55336 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:05:11.201 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.201 04:08:23 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.201 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.201 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.201 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.201 04:08:23 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.201 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.201 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.201 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.201 04:08:23 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.201 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.201 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.201 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.201 04:08:23 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.201 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.201 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.201 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.201 04:08:23 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.201 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.201 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.201 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.201 04:08:23 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.201 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.201 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.201 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.201 04:08:23 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.201 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.201 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.201 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.201 04:08:23 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.201 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.201 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.201 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.201 04:08:23 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.201 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.201 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.201 04:08:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.201 04:08:23 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.201 04:08:23 -- setup/common.sh@32 -- # continue 00:05:11.201 04:08:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.201 
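By this point the test has nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0, asserts that 1024 == nr_hugepages + surp + resv, and re-reads HugePages_Total to confirm the kernel really provisioned the full pool (the meminfo snapshot shows HugePages_Total: 1024 and HugePages_Free: 1024). A compact sketch of that consistency check, not the script's own code:

nr_hugepages=1024 surp=0 resv=0
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
free=$(awk '/^HugePages_Free:/ {print $2}' /proc/meminfo)
(( total == nr_hugepages + surp + resv )) || echo "unexpected HugePages_Total=$total"
(( free == total )) || echo "hugepages already in use: free=$free of $total"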
04:08:23 -- setup/common.sh@31 -- # read -r var val _
[ setup/common.sh@31-@32: every remaining /proc/meminfo field (Active(file) through Unaccepted) is read and skipped with continue until HugePages_Total is reached ]
00:05:11.202 04:08:23 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:11.202 04:08:23 -- setup/common.sh@33 -- # echo 1024
00:05:11.202 04:08:23 -- setup/common.sh@33 -- # return 0
00:05:11.202 04:08:23 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:11.202 04:08:23 -- setup/hugepages.sh@112 -- # get_nodes
00:05:11.202 04:08:23 -- setup/hugepages.sh@27 -- # local node
00:05:11.202 04:08:23 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:11.202 04:08:23 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:11.202 04:08:23 -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:11.202 04:08:23 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:11.202 04:08:23 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:11.202 04:08:23 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:11.202 04:08:23 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:11.202 04:08:23 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:11.202 04:08:23 -- setup/common.sh@18 -- # local node=0
00:05:11.202 04:08:23 -- setup/common.sh@19 -- # local var val
00:05:11.202 04:08:23 -- setup/common.sh@20 -- # local mem_f mem
00:05:11.202 04:08:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:11.202 04:08:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:11.202 04:08:23 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:11.202 04:08:23 -- setup/common.sh@28 -- # mapfile -t mem
00:05:11.202 04:08:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:11.202 04:08:23 -- setup/common.sh@31 -- # IFS=': '
00:05:11.202 04:08:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6651384 kB' 'MemUsed: 5587724 kB' 'SwapCached: 0 kB' 'Active: 456320 kB' 'Inactive: 2661208 kB' 'Active(anon): 128460 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2661208 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'FilePages: 2999572 kB' 'Mapped: 50756 kB' 'AnonPages: 119632 kB' 'Shmem: 10488 kB' 'KernelStack: 6640 kB' 'PageTables: 4428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 82520 kB' 'Slab: 182080 kB' 'SReclaimable: 82520 kB' 'SUnreclaim: 99560 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[ setup/common.sh@31-@32: every node0 meminfo field (MemTotal through HugePages_Free) is read and skipped with continue until HugePages_Surp is reached ]
00:05:11.463 04:08:23 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:11.463 04:08:23 -- setup/common.sh@33 -- # echo 0
00:05:11.463 04:08:23 -- setup/common.sh@33 -- # return 0
00:05:11.463 04:08:23 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:11.463 04:08:23 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:11.463 04:08:23 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:11.463 node0=1024 expecting 1024
************************************
00:05:11.463 END TEST default_setup
00:05:11.463 ************************************
00:05:11.463 04:08:23 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:11.463 04:08:23 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:05:11.463 04:08:23 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:05:11.463
00:05:11.463 real 0m1.050s
00:05:11.463 user 0m0.488s
00:05:11.463 sys 0m0.477s
00:05:11.463 04:08:23 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:05:11.463 04:08:23 -- common/autotest_common.sh@10 -- # set +x
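
Every get_meminfo call in the trace above and below follows the same pattern, so here is a minimal bash sketch of that lookup, reconstructed from the setup/common.sh xtrace entries (the per-node fallback, the "Node <N>" prefix strip and the IFS=': ' read all appear in the log). It is a sketch under those assumptions, not the verbatim SPDK helper.

  #!/usr/bin/env bash
  # Sketch of the meminfo lookup the xtrace above keeps repeating.
  # get_meminfo <field> [node] prints the value of <field>, read either from
  # /proc/meminfo or from /sys/devices/system/node/node<N>/meminfo.
  shopt -s extglob   # needed for the +([0-9]) pattern used below

  get_meminfo() {
      local get=$1 node=$2
      local var val
      local mem_f=/proc/meminfo mem
      if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
          # per-node lookup, as in the "get_meminfo HugePages_Surp 0" call above
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      # per-node meminfo lines carry a "Node <N> " prefix; strip it
      mem=("${mem[@]#Node +([0-9]) }")
      local IFS=': '
      local line
      for line in "${mem[@]}"; do
          read -r var val _ <<< "$line"
          [[ $var == "$get" ]] || continue
          echo "$val"
          return 0
      done
      return 1
  }

  get_meminfo HugePages_Total     # 1024 in the default_setup run above
  get_meminfo HugePages_Surp 0    # 0 for node0
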
00:05:11.463 04:08:23 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:05:11.463 04:08:23 -- common/autotest_common.sh@1087 -- '[' 2 -le 1 ']'
00:05:11.463 04:08:23 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:11.463 04:08:23 -- common/autotest_common.sh@10 -- # set +x
00:05:11.463 ************************************
00:05:11.463 START TEST per_node_1G_alloc
00:05:11.463 ************************************
00:05:11.463 04:08:23 -- common/autotest_common.sh@1114 -- # per_node_1G_alloc
00:05:11.463 04:08:23 -- setup/hugepages.sh@143 -- # local IFS=,
00:05:11.463 04:08:23 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0
00:05:11.463 04:08:23 -- setup/hugepages.sh@49 -- # local size=1048576
00:05:11.463 04:08:23 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:05:11.463 04:08:23 -- setup/hugepages.sh@51 -- # shift
00:05:11.463 04:08:23 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:05:11.463 04:08:23 -- setup/hugepages.sh@52 -- # local node_ids
00:05:11.463 04:08:23 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:11.463 04:08:23 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:05:11.463 04:08:23 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:05:11.463 04:08:23 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:05:11.463 04:08:23 -- setup/hugepages.sh@62 -- # local user_nodes
00:05:11.463 04:08:23 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:05:11.463 04:08:23 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:11.463 04:08:23 -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:11.463 04:08:23 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:11.463 04:08:23 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:05:11.463 04:08:23 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:05:11.463 04:08:23 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:05:11.463 04:08:23 -- setup/hugepages.sh@73 -- # return 0
00:05:11.463 04:08:23 -- setup/hugepages.sh@146 -- # NRHUGE=512
00:05:11.463 04:08:23 -- setup/hugepages.sh@146 -- # HUGENODE=0
00:05:11.463 04:08:23 -- setup/hugepages.sh@146 -- # setup output
00:05:11.463 04:08:23 -- setup/common.sh@9 -- # [[ output == output ]]
00:05:11.463 04:08:23 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:11.722 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:11.722 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:11.722 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:11.722 04:08:24 -- setup/hugepages.sh@147 -- # nr_hugepages=512
00:05:11.722 04:08:24 -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:05:11.722 04:08:24 -- setup/hugepages.sh@89 -- # local node
00:05:11.722 04:08:24 -- setup/hugepages.sh@90 -- # local sorted_t
00:05:11.722 04:08:24 -- setup/hugepages.sh@91 -- # local sorted_s
00:05:11.722 04:08:24 -- setup/hugepages.sh@92 -- # local surp
00:05:11.722 04:08:24 -- setup/hugepages.sh@93 -- # local resv
00:05:11.722 04:08:24 -- setup/hugepages.sh@94 -- # local anon
00:05:11.722 04:08:24 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:11.722 04:08:24 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:11.722 04:08:24 -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:11.722 04:08:24 -- setup/common.sh@18 -- # local node=
00:05:11.722 04:08:24 -- setup/common.sh@19 -- # local var val
00:05:11.722 04:08:24 -- setup/common.sh@20 -- # local mem_f mem
00:05:11.722 04:08:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:11.722 04:08:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:11.722 04:08:24 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:11.722 04:08:24 -- setup/common.sh@28 -- # mapfile -t mem
00:05:11.722 04:08:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:11.722 04:08:24 -- setup/common.sh@31 -- # IFS=': '
00:05:11.722 04:08:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 7704676 kB' 'MemAvailable: 10497868 kB' 'Buffers: 2684 kB' 'Cached: 2996888 kB' 'SwapCached: 0 kB' 'Active: 456664 kB' 'Inactive: 2661224 kB' 'Active(anon): 128804 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2661224 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 119948 kB' 'Mapped: 50872 kB' 'Shmem: 10488 kB' 'KReclaimable: 82520 kB' 'Slab: 182088 kB' 'SReclaimable: 82520 kB' 'SUnreclaim: 99568 kB' 'KernelStack: 6600 kB' 'PageTables: 4220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983868 kB' 'Committed_AS: 320012 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55352 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB'
[ setup/common.sh@31-@32: every /proc/meminfo field from MemTotal through HardwareCorrupted is read and skipped with continue until AnonHugePages is reached ]
00:05:11.724 04:08:24 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:11.724 04:08:24 -- setup/common.sh@33 -- # echo 0
00:05:11.724 04:08:24 -- setup/common.sh@33 -- # return 0
00:05:11.724 04:08:24 -- setup/hugepages.sh@97 -- # anon=0
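
The sizing in this per_node_1G_alloc run follows from the Hugepagesize reported in the meminfo snapshot above (2048 kB): a 1 GiB (1048576 kB) request on node 0 works out to the nr_hugepages=512 / NRHUGE=512 seen in the get_test_nr_hugepages trace. The exact expression lives in setup/hugepages.sh and is not shown in this excerpt; the lines below are only a back-of-the-envelope check, reusing the get_meminfo sketch from earlier.

  # Per-node hugepage count for a 1 GiB request, with values taken from this log.
  size_kb=1048576                                # requested per node (1 GiB)
  hugepagesize_kb=$(get_meminfo Hugepagesize)    # 2048 on this VM
  nr_hugepages=$(( size_kb / hugepagesize_kb ))  # 512
  echo "NRHUGE=$nr_hugepages HUGENODE=0"         # matches setup/hugepages.sh@146 above
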
00:05:11.724 04:08:24 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:11.724 04:08:24 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:11.724 04:08:24 -- setup/common.sh@18 -- # local node=
00:05:11.724 04:08:24 -- setup/common.sh@19 -- # local var val
00:05:11.724 04:08:24 -- setup/common.sh@20 -- # local mem_f mem
00:05:11.724 04:08:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:11.724 04:08:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:11.724 04:08:24 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:11.724 04:08:24 -- setup/common.sh@28 -- # mapfile -t mem
00:05:11.724 04:08:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:11.724 04:08:24 -- setup/common.sh@31 -- # IFS=': '
00:05:11.724 04:08:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 7704424 kB' 'MemAvailable: 10497616 kB' 'Buffers: 2684 kB' 'Cached: 2996888 kB' 'SwapCached: 0 kB' 'Active: 456368 kB' 'Inactive: 2661224 kB' 'Active(anon): 128508 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2661224 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 119592 kB' 'Mapped: 50756 kB' 'Shmem: 10488 kB' 'KReclaimable: 82520 kB' 'Slab: 182080 kB' 'SReclaimable: 82520 kB' 'SUnreclaim: 99560 kB' 'KernelStack: 6624 kB' 'PageTables: 4380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983868 kB' 'Committed_AS: 320012 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55352 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB'
00:05:11.724 04:08:24 -- setup/common.sh@31 -- # read -r var val _
[ setup/common.sh@31-@32: every /proc/meminfo field from MemTotal through HugePages_Rsvd is read and skipped with continue until HugePages_Surp is reached ]
00:05:11.987 04:08:24 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:11.987 04:08:24 -- setup/common.sh@33 -- # echo 0
00:05:11.987 04:08:24 -- setup/common.sh@33 -- # return 0
00:05:11.987 04:08:24 -- setup/hugepages.sh@99 -- # surp=0
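
What follows is the matching HugePages_Rsvd lookup, the last input to the verification pass. A rough sketch of that accounting is below; the control flow is inferred from the hugepages.sh line numbers in the xtrace and from the "node0=1024 expecting 1024" output earlier, not copied from the repository, so treat the structural details as assumptions.

  # Rough shape of the verification pass traced here (hugepages.sh@89-@130).
  # Assumes get_meminfo from the earlier sketch, and that nr_hugepages,
  # nodes_test[] and nodes_sys[] were filled in by get_test_nr_hugepages/get_nodes.
  verify_nr_hugepages() {
      local node surp resv anon
      anon=$(get_meminfo AnonHugePages)    # 0 in this run
      surp=$(get_meminfo HugePages_Surp)   # 0 in this run
      resv=$(get_meminfo HugePages_Rsvd)   # the lookup traced below
      # hugepages.sh@110: every configured page must be accounted for
      (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv )) || return 1
      for node in "${!nodes_test[@]}"; do
          (( nodes_test[node] += resv ))
          (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))
          # printed as "node0=1024 expecting 1024" in the default_setup output above
          echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
          [[ ${nodes_sys[node]} == "${nodes_test[node]}" ]] || return 1
      done
  }
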
00:05:11.987 04:08:24 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:11.987 04:08:24 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:11.987 04:08:24 -- setup/common.sh@18 -- # local node=
00:05:11.987 04:08:24 -- setup/common.sh@19 -- # local var val
00:05:11.987 04:08:24 -- setup/common.sh@20 -- # local mem_f mem
00:05:11.987 04:08:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:11.987 04:08:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:11.987 04:08:24 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:11.987 04:08:24 -- setup/common.sh@28 -- # mapfile -t mem
00:05:11.987 04:08:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:11.987 04:08:24 -- setup/common.sh@31 -- # IFS=': '
00:05:11.987 04:08:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 7704944 kB' 'MemAvailable: 10498136 kB' 'Buffers: 2684 kB' 'Cached: 2996888 kB' 'SwapCached: 0 kB' 'Active: 456412 kB' 'Inactive: 2661224 kB' 'Active(anon): 128552 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2661224 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 119656 kB' 'Mapped: 50756 kB' 'Shmem: 10488 kB' 'KReclaimable: 82520 kB' 'Slab: 182076 kB' 'SReclaimable: 82520 kB' 'SUnreclaim: 99556 kB' 'KernelStack: 6640 kB' 'PageTables: 4428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983868 kB' 'Committed_AS: 320012 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55352 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB'
[ setup/common.sh@31-@32: each /proc/meminfo field is read and skipped with continue while the loop scans for HugePages_Rsvd ]
00:05:11.988 04:08:24 -- setup/common.sh@32 -- # [[
HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.988 04:08:24 -- setup/common.sh@33 -- # echo 0 00:05:11.988 04:08:24 -- setup/common.sh@33 -- # return 0 00:05:11.988 04:08:24 -- setup/hugepages.sh@100 -- # resv=0 00:05:11.988 04:08:24 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:05:11.988 nr_hugepages=512 00:05:11.988 04:08:24 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:11.988 resv_hugepages=0 00:05:11.988 04:08:24 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:11.988 surplus_hugepages=0 00:05:11.988 04:08:24 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:11.988 anon_hugepages=0 00:05:11.988 04:08:24 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:11.988 04:08:24 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:05:11.988 04:08:24 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:11.988 04:08:24 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:11.988 04:08:24 -- setup/common.sh@18 -- # local node= 00:05:11.988 04:08:24 -- setup/common.sh@19 -- # local var val 00:05:11.988 04:08:24 -- setup/common.sh@20 -- # local mem_f mem 00:05:11.988 04:08:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:11.988 04:08:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:11.988 04:08:24 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:11.988 04:08:24 -- setup/common.sh@28 -- # mapfile -t mem 00:05:11.988 04:08:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:11.988 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.988 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.988 04:08:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 7704772 kB' 'MemAvailable: 10497964 kB' 'Buffers: 2684 kB' 'Cached: 2996888 kB' 'SwapCached: 0 kB' 'Active: 456332 kB' 'Inactive: 2661224 kB' 'Active(anon): 128472 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2661224 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 119552 kB' 'Mapped: 50756 kB' 'Shmem: 10488 kB' 'KReclaimable: 82520 kB' 'Slab: 182076 kB' 'SReclaimable: 82520 kB' 'SUnreclaim: 99556 kB' 'KernelStack: 6624 kB' 'PageTables: 4380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983868 kB' 'Committed_AS: 320012 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55352 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:05:11.988 04:08:24 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.988 04:08:24 -- setup/common.sh@32 -- # continue 00:05:11.988 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.988 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.988 04:08:24 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.988 04:08:24 -- setup/common.sh@32 -- # continue 00:05:11.988 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.988 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.988 
04:08:24 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.988 04:08:24 -- setup/common.sh@32 -- # continue 00:05:11.988 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.988 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.988 04:08:24 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.988 04:08:24 -- setup/common.sh@32 -- # continue 00:05:11.988 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.988 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.988 04:08:24 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.988 04:08:24 -- setup/common.sh@32 -- # continue 00:05:11.988 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.988 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.988 04:08:24 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.988 04:08:24 -- setup/common.sh@32 -- # continue 00:05:11.988 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.988 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.988 04:08:24 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.988 04:08:24 -- setup/common.sh@32 -- # continue 00:05:11.988 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.988 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.988 04:08:24 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.988 04:08:24 -- setup/common.sh@32 -- # continue 00:05:11.988 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.988 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.988 04:08:24 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.988 04:08:24 -- setup/common.sh@32 -- # continue 00:05:11.988 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.988 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.988 04:08:24 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.989 04:08:24 -- setup/common.sh@32 -- # continue 00:05:11.989 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.989 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.989 04:08:24 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.989 04:08:24 -- setup/common.sh@32 -- # continue 00:05:11.989 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.989 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.989 04:08:24 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.989 04:08:24 -- setup/common.sh@32 -- # continue 00:05:11.989 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.989 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.989 04:08:24 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.989 04:08:24 -- setup/common.sh@32 -- # continue 00:05:11.989 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.989 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.989 04:08:24 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.989 04:08:24 -- setup/common.sh@32 -- # continue 00:05:11.989 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.989 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.989 04:08:24 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.989 04:08:24 -- setup/common.sh@32 -- # continue 00:05:11.989 
04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.989 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.989 04:08:24 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.989 04:08:24 -- setup/common.sh@32 -- # continue 00:05:11.989 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.989 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.989 04:08:24 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.989 04:08:24 -- setup/common.sh@32 -- # continue 00:05:11.989 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.989 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.989 04:08:24 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.989 04:08:24 -- setup/common.sh@32 -- # continue 00:05:11.989 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.989 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.989 04:08:24 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.989 04:08:24 -- setup/common.sh@32 -- # continue 00:05:11.989 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.989 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.989 04:08:24 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.989 04:08:24 -- setup/common.sh@32 -- # continue 00:05:11.989 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.989 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.989 04:08:24 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.989 04:08:24 -- setup/common.sh@32 -- # continue 00:05:11.989 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.989 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.989 04:08:24 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.989 04:08:24 -- setup/common.sh@32 -- # continue 00:05:11.989 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.989 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.989 04:08:24 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.989 04:08:24 -- setup/common.sh@32 -- # continue 00:05:11.989 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.989 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.989 04:08:24 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.989 04:08:24 -- setup/common.sh@32 -- # continue 00:05:11.989 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.989 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.989 04:08:24 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.989 04:08:24 -- setup/common.sh@32 -- # continue 00:05:11.989 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.989 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.989 04:08:24 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.989 04:08:24 -- setup/common.sh@32 -- # continue 00:05:11.989 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.989 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.989 04:08:24 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.989 04:08:24 -- setup/common.sh@32 -- # continue 00:05:11.989 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.989 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.989 04:08:24 -- setup/common.sh@32 -- # [[ KernelStack == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.989 04:08:24 -- setup/common.sh@32 -- # continue 00:05:11.989 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.989 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.989 04:08:24 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.989 04:08:24 -- setup/common.sh@32 -- # continue 00:05:11.989 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.989 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.989 04:08:24 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.989 04:08:24 -- setup/common.sh@32 -- # continue 00:05:11.989 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.989 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.989 04:08:24 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.989 04:08:24 -- setup/common.sh@32 -- # continue 00:05:11.989 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.989 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.989 04:08:24 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.989 04:08:24 -- setup/common.sh@32 -- # continue 00:05:11.989 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.989 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.989 04:08:24 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.989 04:08:24 -- setup/common.sh@32 -- # continue 00:05:11.989 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.989 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.989 04:08:24 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.989 04:08:24 -- setup/common.sh@32 -- # continue 00:05:11.989 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.989 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.989 04:08:24 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.989 04:08:24 -- setup/common.sh@32 -- # continue 00:05:11.989 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.989 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.989 04:08:24 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.989 04:08:24 -- setup/common.sh@32 -- # continue 00:05:11.989 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.989 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.989 04:08:24 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.989 04:08:24 -- setup/common.sh@32 -- # continue 00:05:11.989 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.989 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.989 04:08:24 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.989 04:08:24 -- setup/common.sh@32 -- # continue 00:05:11.989 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.989 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.989 04:08:24 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.989 04:08:24 -- setup/common.sh@32 -- # continue 00:05:11.989 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.989 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.989 04:08:24 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.989 04:08:24 -- setup/common.sh@32 -- # continue 00:05:11.989 04:08:24 -- setup/common.sh@31 -- # 
IFS=': ' 00:05:11.989 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.989 04:08:24 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.989 04:08:24 -- setup/common.sh@32 -- # continue 00:05:11.989 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.989 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.989 04:08:24 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.989 04:08:24 -- setup/common.sh@32 -- # continue 00:05:11.989 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.989 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.989 04:08:24 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.989 04:08:24 -- setup/common.sh@32 -- # continue 00:05:11.989 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.989 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.989 04:08:24 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.989 04:08:24 -- setup/common.sh@32 -- # continue 00:05:11.989 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.989 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.989 04:08:24 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.989 04:08:24 -- setup/common.sh@32 -- # continue 00:05:11.989 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.989 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.989 04:08:24 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.989 04:08:24 -- setup/common.sh@32 -- # continue 00:05:11.989 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.989 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.989 04:08:24 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.989 04:08:24 -- setup/common.sh@32 -- # continue 00:05:11.989 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.989 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.989 04:08:24 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.989 04:08:24 -- setup/common.sh@32 -- # continue 00:05:11.989 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.989 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.989 04:08:24 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.989 04:08:24 -- setup/common.sh@33 -- # echo 512 00:05:11.989 04:08:24 -- setup/common.sh@33 -- # return 0 00:05:11.989 04:08:24 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:11.989 04:08:24 -- setup/hugepages.sh@112 -- # get_nodes 00:05:11.989 04:08:24 -- setup/hugepages.sh@27 -- # local node 00:05:11.989 04:08:24 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:11.990 04:08:24 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:11.990 04:08:24 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:11.990 04:08:24 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:11.990 04:08:24 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:11.990 04:08:24 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:11.990 04:08:24 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:11.990 04:08:24 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:11.990 04:08:24 -- setup/common.sh@18 -- # local node=0 00:05:11.990 04:08:24 -- setup/common.sh@19 -- # local 
var val 00:05:11.990 04:08:24 -- setup/common.sh@20 -- # local mem_f mem 00:05:11.990 04:08:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:11.990 04:08:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:11.990 04:08:24 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:11.990 04:08:24 -- setup/common.sh@28 -- # mapfile -t mem 00:05:11.990 04:08:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:11.990 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.990 04:08:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 7705292 kB' 'MemUsed: 4533816 kB' 'SwapCached: 0 kB' 'Active: 456388 kB' 'Inactive: 2661224 kB' 'Active(anon): 128528 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2661224 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'FilePages: 2999572 kB' 'Mapped: 50756 kB' 'AnonPages: 119652 kB' 'Shmem: 10488 kB' 'KernelStack: 6640 kB' 'PageTables: 4428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 82520 kB' 'Slab: 182076 kB' 'SReclaimable: 82520 kB' 'SUnreclaim: 99556 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:11.990 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.990 04:08:24 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.990 04:08:24 -- setup/common.sh@32 -- # continue 00:05:11.990 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.990 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.990 04:08:24 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.990 04:08:24 -- setup/common.sh@32 -- # continue 00:05:11.990 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.990 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.990 04:08:24 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.990 04:08:24 -- setup/common.sh@32 -- # continue 00:05:11.990 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.990 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.990 04:08:24 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.990 04:08:24 -- setup/common.sh@32 -- # continue 00:05:11.990 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.990 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.990 04:08:24 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.990 04:08:24 -- setup/common.sh@32 -- # continue 00:05:11.990 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.990 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.990 04:08:24 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.990 04:08:24 -- setup/common.sh@32 -- # continue 00:05:11.990 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.990 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.990 04:08:24 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.990 04:08:24 -- setup/common.sh@32 -- # continue 00:05:11.990 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.990 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.990 04:08:24 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.990 04:08:24 -- 
setup/common.sh@32 -- # continue 00:05:11.990 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.990 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.990 04:08:24 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.990 04:08:24 -- setup/common.sh@32 -- # continue 00:05:11.990 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.990 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.990 04:08:24 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.990 04:08:24 -- setup/common.sh@32 -- # continue 00:05:11.990 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.990 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.990 04:08:24 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.990 04:08:24 -- setup/common.sh@32 -- # continue 00:05:11.990 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.990 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.990 04:08:24 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.990 04:08:24 -- setup/common.sh@32 -- # continue 00:05:11.990 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.990 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.990 04:08:24 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.990 04:08:24 -- setup/common.sh@32 -- # continue 00:05:11.990 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.990 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.990 04:08:24 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.990 04:08:24 -- setup/common.sh@32 -- # continue 00:05:11.990 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.990 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.990 04:08:24 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.990 04:08:24 -- setup/common.sh@32 -- # continue 00:05:11.990 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.990 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.990 04:08:24 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.990 04:08:24 -- setup/common.sh@32 -- # continue 00:05:11.990 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.990 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.990 04:08:24 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.990 04:08:24 -- setup/common.sh@32 -- # continue 00:05:11.990 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.990 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.990 04:08:24 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.990 04:08:24 -- setup/common.sh@32 -- # continue 00:05:11.990 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.990 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.990 04:08:24 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.990 04:08:24 -- setup/common.sh@32 -- # continue 00:05:11.990 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.990 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.990 04:08:24 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.990 04:08:24 -- setup/common.sh@32 -- # continue 00:05:11.990 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.990 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.990 04:08:24 -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.990 04:08:24 -- setup/common.sh@32 -- # continue 00:05:11.990 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.990 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.990 04:08:24 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.990 04:08:24 -- setup/common.sh@32 -- # continue 00:05:11.990 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.990 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.990 04:08:24 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.990 04:08:24 -- setup/common.sh@32 -- # continue 00:05:11.990 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.990 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.990 04:08:24 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.990 04:08:24 -- setup/common.sh@32 -- # continue 00:05:11.990 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.990 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.990 04:08:24 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.990 04:08:24 -- setup/common.sh@32 -- # continue 00:05:11.990 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.990 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.990 04:08:24 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.990 04:08:24 -- setup/common.sh@32 -- # continue 00:05:11.990 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.990 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.990 04:08:24 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.990 04:08:24 -- setup/common.sh@32 -- # continue 00:05:11.990 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.990 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.990 04:08:24 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.990 04:08:24 -- setup/common.sh@32 -- # continue 00:05:11.990 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.990 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.990 04:08:24 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.990 04:08:24 -- setup/common.sh@32 -- # continue 00:05:11.990 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.990 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.990 04:08:24 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.990 04:08:24 -- setup/common.sh@32 -- # continue 00:05:11.990 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.990 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.990 04:08:24 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.990 04:08:24 -- setup/common.sh@32 -- # continue 00:05:11.990 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.990 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.990 04:08:24 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.990 04:08:24 -- setup/common.sh@32 -- # continue 00:05:11.990 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.990 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.990 04:08:24 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.990 04:08:24 -- setup/common.sh@32 -- # continue 00:05:11.990 04:08:24 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:11.990 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.990 04:08:24 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.991 04:08:24 -- setup/common.sh@32 -- # continue 00:05:11.991 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.991 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.991 04:08:24 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.991 04:08:24 -- setup/common.sh@32 -- # continue 00:05:11.991 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.991 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.991 04:08:24 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.991 04:08:24 -- setup/common.sh@32 -- # continue 00:05:11.991 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:11.991 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:11.991 04:08:24 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.991 04:08:24 -- setup/common.sh@33 -- # echo 0 00:05:11.991 04:08:24 -- setup/common.sh@33 -- # return 0 00:05:11.991 04:08:24 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:11.991 04:08:24 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:11.991 node0=512 expecting 512 00:05:11.991 ************************************ 00:05:11.991 END TEST per_node_1G_alloc 00:05:11.991 ************************************ 00:05:11.991 04:08:24 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:11.991 04:08:24 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:11.991 04:08:24 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:11.991 04:08:24 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:11.991 00:05:11.991 real 0m0.609s 00:05:11.991 user 0m0.311s 00:05:11.991 sys 0m0.289s 00:05:11.991 04:08:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:11.991 04:08:24 -- common/autotest_common.sh@10 -- # set +x 00:05:11.991 04:08:24 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:05:11.991 04:08:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:11.991 04:08:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:11.991 04:08:24 -- common/autotest_common.sh@10 -- # set +x 00:05:11.991 ************************************ 00:05:11.991 START TEST even_2G_alloc 00:05:11.991 ************************************ 00:05:11.991 04:08:24 -- common/autotest_common.sh@1114 -- # even_2G_alloc 00:05:11.991 04:08:24 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:05:11.991 04:08:24 -- setup/hugepages.sh@49 -- # local size=2097152 00:05:11.991 04:08:24 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:11.991 04:08:24 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:11.991 04:08:24 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:11.991 04:08:24 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:11.991 04:08:24 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:11.991 04:08:24 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:11.991 04:08:24 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:11.991 04:08:24 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:11.991 04:08:24 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:11.991 04:08:24 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:11.991 04:08:24 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:11.991 04:08:24 -- 
setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:11.991 04:08:24 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:11.991 04:08:24 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:05:11.991 04:08:24 -- setup/hugepages.sh@83 -- # : 0 00:05:11.991 04:08:24 -- setup/hugepages.sh@84 -- # : 0 00:05:11.991 04:08:24 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:11.991 04:08:24 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:05:11.991 04:08:24 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:05:11.991 04:08:24 -- setup/hugepages.sh@153 -- # setup output 00:05:11.991 04:08:24 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:11.991 04:08:24 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:12.251 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:12.251 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:12.251 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:12.521 04:08:24 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:05:12.521 04:08:24 -- setup/hugepages.sh@89 -- # local node 00:05:12.521 04:08:24 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:12.521 04:08:24 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:12.521 04:08:24 -- setup/hugepages.sh@92 -- # local surp 00:05:12.521 04:08:24 -- setup/hugepages.sh@93 -- # local resv 00:05:12.521 04:08:24 -- setup/hugepages.sh@94 -- # local anon 00:05:12.521 04:08:24 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:12.521 04:08:24 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:12.521 04:08:24 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:12.521 04:08:24 -- setup/common.sh@18 -- # local node= 00:05:12.521 04:08:24 -- setup/common.sh@19 -- # local var val 00:05:12.521 04:08:24 -- setup/common.sh@20 -- # local mem_f mem 00:05:12.521 04:08:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:12.521 04:08:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:12.521 04:08:24 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:12.521 04:08:24 -- setup/common.sh@28 -- # mapfile -t mem 00:05:12.521 04:08:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:12.521 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.521 04:08:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6650620 kB' 'MemAvailable: 9443812 kB' 'Buffers: 2684 kB' 'Cached: 2996888 kB' 'SwapCached: 0 kB' 'Active: 456656 kB' 'Inactive: 2661224 kB' 'Active(anon): 128796 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2661224 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 119924 kB' 'Mapped: 50880 kB' 'Shmem: 10488 kB' 'KReclaimable: 82520 kB' 'Slab: 182124 kB' 'SReclaimable: 82520 kB' 'SUnreclaim: 99604 kB' 'KernelStack: 6632 kB' 'PageTables: 4300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 320012 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55384 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 
'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:05:12.521 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.521 04:08:24 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.521 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.522 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.522 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.522 04:08:24 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.522 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.522 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.522 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.522 04:08:24 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.522 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.522 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.522 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.522 04:08:24 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.522 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.522 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.522 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.522 04:08:24 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.522 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.522 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.522 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.522 04:08:24 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.522 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.522 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.522 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.522 04:08:24 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.522 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.522 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.522 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.522 04:08:24 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.522 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.522 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.522 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.522 04:08:24 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.522 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.522 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.522 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.522 04:08:24 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.522 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.522 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.522 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.522 04:08:24 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.522 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.522 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.522 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.522 04:08:24 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.522 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.522 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.522 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.522 
04:08:24 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.522 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.522 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.522 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.522 04:08:24 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.522 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.522 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.522 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.522 04:08:24 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.522 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.522 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.522 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.522 04:08:24 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.522 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.522 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.522 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.522 04:08:24 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.522 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.522 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.522 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.522 04:08:24 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.522 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.522 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.522 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.522 04:08:24 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.522 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.522 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.522 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.522 04:08:24 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.522 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.522 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.522 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.522 04:08:24 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.522 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.522 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.522 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.522 04:08:24 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.522 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.522 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.522 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.522 04:08:24 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.522 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.522 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.522 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.522 04:08:24 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.522 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.522 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.522 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.522 04:08:24 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.522 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.522 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.522 04:08:24 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:12.522 04:08:24 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.522 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.522 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.522 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.522 04:08:24 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.522 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.522 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.522 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.522 04:08:24 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.522 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.522 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.522 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.522 04:08:24 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.522 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.522 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.522 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.522 04:08:24 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.522 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.522 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.522 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.522 04:08:24 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.522 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.522 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.522 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.522 04:08:24 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.522 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.522 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.522 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.522 04:08:24 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.522 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.522 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.522 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.522 04:08:24 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.522 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.522 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.522 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.522 04:08:24 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.522 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.522 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.522 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.522 04:08:24 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.522 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.522 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.522 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.522 04:08:24 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.522 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.522 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.522 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.522 04:08:24 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.522 04:08:24 -- setup/common.sh@32 -- # 
continue 00:05:12.522 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.522 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.522 04:08:24 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.522 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.522 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.522 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.522 04:08:24 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.522 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.522 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.522 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.522 04:08:24 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.522 04:08:24 -- setup/common.sh@33 -- # echo 0 00:05:12.522 04:08:24 -- setup/common.sh@33 -- # return 0 00:05:12.522 04:08:24 -- setup/hugepages.sh@97 -- # anon=0 00:05:12.522 04:08:24 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:12.523 04:08:24 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:12.523 04:08:24 -- setup/common.sh@18 -- # local node= 00:05:12.523 04:08:24 -- setup/common.sh@19 -- # local var val 00:05:12.523 04:08:24 -- setup/common.sh@20 -- # local mem_f mem 00:05:12.523 04:08:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:12.523 04:08:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:12.523 04:08:24 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:12.523 04:08:24 -- setup/common.sh@28 -- # mapfile -t mem 00:05:12.523 04:08:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:12.523 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.523 04:08:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6653196 kB' 'MemAvailable: 9446388 kB' 'Buffers: 2684 kB' 'Cached: 2996888 kB' 'SwapCached: 0 kB' 'Active: 456424 kB' 'Inactive: 2661224 kB' 'Active(anon): 128564 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2661224 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 119748 kB' 'Mapped: 50756 kB' 'Shmem: 10488 kB' 'KReclaimable: 82520 kB' 'Slab: 182144 kB' 'SReclaimable: 82520 kB' 'SUnreclaim: 99624 kB' 'KernelStack: 6656 kB' 'PageTables: 4476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 320012 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55384 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:05:12.523 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.523 04:08:24 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.523 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.523 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.523 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.523 04:08:24 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.523 04:08:24 -- setup/common.sh@32 -- # continue 
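The xtrace above is the test's meminfo scan: setup/common.sh reads /proc/meminfo (or, when a node is given, /sys/devices/system/node/nodeN/meminfo), strips the "Node <n>" prefix, then walks the key/value pairs until the requested field (HugePages_Total, HugePages_Rsvd, HugePages_Surp, AnonHugePages, ...) matches, and echoes its value back to hugepages.sh for comparison against the configured page count (512 for the per-node test above, 1024 for even_2G_alloc). A minimal standalone sketch of that pattern follows; it is a simplification for illustration, not the actual setup/common.sh implementation.

#!/usr/bin/env bash
# Sketch only: scan a meminfo file for one field, the way the traced loop does.
get_meminfo() {
    local get=$1 node=${2:-}        # field name, optional NUMA node number
    local mem_f=/proc/meminfo
    # Per-node stats live in sysfs and prefix every line with "Node <n>".
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local line var val rest
    while IFS= read -r line; do
        [[ -n $node ]] && line=${line#"Node $node "}   # drop the per-node prefix
        IFS=': ' read -r var val rest <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"             # value only; the unit (kB) lands in $rest and is dropped
            return 0
        fi
    done < "$mem_f"
    return 1
}

# Example of the check the log is performing: node 0 should hold all 512
# preallocated 2 MiB hugepages and report no surplus.
[[ $(get_meminfo HugePages_Total 0) -eq 512 && $(get_meminfo HugePages_Surp 0) -eq 0 ]] \
    && echo 'node0=512 expecting 512'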
00:05:12.523 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.523 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.523 04:08:24 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.523 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.523 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.523 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.523 04:08:24 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.523 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.523 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.523 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.523 04:08:24 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.523 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.523 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.523 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.523 04:08:24 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.523 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.523 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.523 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.523 04:08:24 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.523 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.523 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.523 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.523 04:08:24 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.523 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.523 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.523 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.523 04:08:24 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.523 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.523 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.523 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.523 04:08:24 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.523 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.523 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.523 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.523 04:08:24 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.523 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.523 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.523 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.523 04:08:24 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.523 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.523 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.523 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.523 04:08:24 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.523 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.523 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.523 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.523 04:08:24 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.523 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.523 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.523 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.523 04:08:24 -- setup/common.sh@32 -- # [[ 
SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.523 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.523 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.523 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.523 04:08:24 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.523 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.523 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.523 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.523 04:08:24 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.523 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.523 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.523 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.523 04:08:24 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.523 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.523 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.523 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.523 04:08:24 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.523 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.523 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.523 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.523 04:08:24 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.523 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.523 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.523 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.523 04:08:24 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.523 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.523 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.523 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.523 04:08:24 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.523 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.523 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.523 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.523 04:08:24 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.523 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.523 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.523 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.523 04:08:24 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.523 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.523 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.523 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.523 04:08:24 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.523 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.523 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.523 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.523 04:08:24 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.523 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.523 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.523 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.523 04:08:24 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.523 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.523 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.523 04:08:24 -- setup/common.sh@31 -- # 
read -r var val _ 00:05:12.523 04:08:24 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.523 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.523 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.523 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.523 04:08:24 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.523 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.523 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.523 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.523 04:08:24 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.523 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.523 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.523 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.523 04:08:24 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.523 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.523 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.523 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.523 04:08:24 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.523 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.523 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.523 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.523 04:08:24 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.523 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.523 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.524 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.524 04:08:24 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.524 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.524 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.524 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.524 04:08:24 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.524 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.524 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.524 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.524 04:08:24 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.524 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.524 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.524 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.524 04:08:24 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.524 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.524 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.524 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.524 04:08:24 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.524 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.524 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.524 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.524 04:08:24 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.524 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.524 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.524 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.524 04:08:24 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.524 04:08:24 -- setup/common.sh@32 -- # 
continue 00:05:12.524 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.524 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.524 04:08:24 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.524 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.524 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.524 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.524 04:08:24 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.524 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.524 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.524 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.524 04:08:24 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.524 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.524 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.524 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.524 04:08:24 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.524 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.524 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.524 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.524 04:08:24 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.524 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.524 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.524 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.524 04:08:24 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.524 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.524 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.524 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.524 04:08:24 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.524 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.524 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.524 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.524 04:08:24 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.524 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.524 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.524 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.524 04:08:24 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.524 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.524 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.524 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.524 04:08:24 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.524 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.524 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.524 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.524 04:08:24 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.524 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.524 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.524 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.524 04:08:24 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.524 04:08:24 -- setup/common.sh@33 -- # echo 0 00:05:12.524 04:08:24 -- setup/common.sh@33 -- # return 0 00:05:12.524 04:08:24 -- setup/hugepages.sh@99 -- # surp=0 00:05:12.524 04:08:24 -- 
setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:12.524 04:08:24 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:12.524 04:08:24 -- setup/common.sh@18 -- # local node= 00:05:12.524 04:08:24 -- setup/common.sh@19 -- # local var val 00:05:12.524 04:08:24 -- setup/common.sh@20 -- # local mem_f mem 00:05:12.524 04:08:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:12.524 04:08:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:12.524 04:08:24 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:12.524 04:08:24 -- setup/common.sh@28 -- # mapfile -t mem 00:05:12.524 04:08:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:12.524 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.524 04:08:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6653196 kB' 'MemAvailable: 9446388 kB' 'Buffers: 2684 kB' 'Cached: 2996888 kB' 'SwapCached: 0 kB' 'Active: 456404 kB' 'Inactive: 2661224 kB' 'Active(anon): 128544 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2661224 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 119660 kB' 'Mapped: 50756 kB' 'Shmem: 10488 kB' 'KReclaimable: 82520 kB' 'Slab: 182140 kB' 'SReclaimable: 82520 kB' 'SUnreclaim: 99620 kB' 'KernelStack: 6640 kB' 'PageTables: 4428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 320012 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55384 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:05:12.524 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.524 04:08:24 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.524 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.524 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.524 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.524 04:08:24 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.524 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.524 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.524 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.524 04:08:24 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.524 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.524 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.524 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.524 04:08:24 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.524 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.524 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.524 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.524 04:08:24 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.524 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.524 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.524 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.524 04:08:24 -- 
setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.524 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.524 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.524 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.524 04:08:24 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.524 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.524 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.524 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.524 04:08:24 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.524 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.524 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.524 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.524 04:08:24 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.524 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.524 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.524 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.524 04:08:24 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.524 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.524 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.524 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.524 04:08:24 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.524 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.524 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.524 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.524 04:08:24 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.524 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.524 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.524 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.524 04:08:24 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.524 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.524 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.524 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.524 04:08:24 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.524 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.524 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.525 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.525 04:08:24 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.525 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.525 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.525 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.525 04:08:24 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.525 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.525 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.525 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.525 04:08:24 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.525 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.525 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.525 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.525 04:08:24 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.525 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.525 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 
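Each get_meminfo call above first decides which meminfo file to scan: with no node argument (local node= is empty) the -e test on /sys/devices/system/node/node/meminfo fails and the scan stays on /proc/meminfo, while the per-node pass later in this test points mem_f at node 0's own file. A hedged sketch of that selection, using the variable names from the trace but a simplified structure (extglob is assumed to be enabled, as the Node-prefix strip requires):

  local node=$1 mem_f=/proc/meminfo
  if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
      mem_f=/sys/devices/system/node/node$node/meminfo   # e.g. node=0 in the pass further down
  fi
  mapfile -t mem < "$mem_f"
  mem=("${mem[@]#Node +([0-9]) }")   # per-node meminfo lines start with "Node N "; strip that prefix
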
00:05:12.525 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.525 04:08:24 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.525 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.525 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.525 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.525 04:08:24 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.525 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.525 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.525 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.525 04:08:24 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.525 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.525 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.525 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.525 04:08:24 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.525 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.525 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.525 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.525 04:08:24 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.525 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.525 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.525 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.525 04:08:24 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.525 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.525 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.525 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.525 04:08:24 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.525 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.525 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.525 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.525 04:08:24 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.525 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.525 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.525 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.525 04:08:24 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.525 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.525 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.525 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.525 04:08:24 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.525 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.525 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.525 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.525 04:08:24 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.525 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.525 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.525 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.525 04:08:24 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.525 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.525 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.525 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.525 04:08:24 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.525 04:08:24 -- 
setup/common.sh@32 -- # continue 00:05:12.525 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.525 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.525 04:08:24 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.525 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.525 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.525 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.525 04:08:24 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.525 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.525 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.525 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.525 04:08:24 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.525 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.525 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.525 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.525 04:08:24 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.525 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.525 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.525 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.525 04:08:24 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.525 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.525 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.525 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.525 04:08:24 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.525 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.525 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.525 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.525 04:08:24 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.525 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.525 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.525 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.525 04:08:24 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.525 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.525 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.525 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.525 04:08:24 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.525 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.525 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.525 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.525 04:08:24 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.525 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.525 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.525 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.525 04:08:24 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.525 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.525 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.525 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.525 04:08:24 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.525 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.525 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.525 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 
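The backslash-riddled patterns in these entries (\H\u\g\e\P\a\g\e\s\_\R\s\v\d and so on) are not in the script source; they are how bash xtrace prints a quoted right-hand side inside [[ ... == ... ]], escaping every character to show the match is literal rather than a glob. A small reproduction of the presumed behaviour:

  set -x
  key=HugePages_Rsvd
  [[ $key == "$key" ]]
  # xtrace prints: + [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
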
00:05:12.525 04:08:24 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.525 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.525 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.525 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.525 04:08:24 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.525 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.525 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.525 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.525 04:08:24 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.525 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.525 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.525 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.525 04:08:24 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.525 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.525 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.525 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.525 04:08:24 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.525 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.525 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.525 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.525 04:08:24 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.525 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.525 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.525 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.525 04:08:24 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.525 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.525 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.525 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.525 04:08:24 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.525 04:08:24 -- setup/common.sh@33 -- # echo 0 00:05:12.525 04:08:24 -- setup/common.sh@33 -- # return 0 00:05:12.525 nr_hugepages=1024 00:05:12.525 resv_hugepages=0 00:05:12.525 surplus_hugepages=0 00:05:12.525 anon_hugepages=0 00:05:12.525 04:08:24 -- setup/hugepages.sh@100 -- # resv=0 00:05:12.525 04:08:24 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:12.525 04:08:24 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:12.525 04:08:24 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:12.525 04:08:24 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:12.525 04:08:24 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:12.525 04:08:24 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:12.525 04:08:24 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:12.525 04:08:24 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:12.525 04:08:24 -- setup/common.sh@18 -- # local node= 00:05:12.525 04:08:24 -- setup/common.sh@19 -- # local var val 00:05:12.526 04:08:24 -- setup/common.sh@20 -- # local mem_f mem 00:05:12.526 04:08:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:12.526 04:08:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:12.526 04:08:24 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:12.526 04:08:24 -- setup/common.sh@28 -- # mapfile -t mem 00:05:12.526 04:08:24 -- setup/common.sh@29 -- # 
mem=("${mem[@]#Node +([0-9]) }") 00:05:12.526 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.526 04:08:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6654504 kB' 'MemAvailable: 9447696 kB' 'Buffers: 2684 kB' 'Cached: 2996888 kB' 'SwapCached: 0 kB' 'Active: 456424 kB' 'Inactive: 2661224 kB' 'Active(anon): 128564 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2661224 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 119676 kB' 'Mapped: 50756 kB' 'Shmem: 10488 kB' 'KReclaimable: 82520 kB' 'Slab: 182140 kB' 'SReclaimable: 82520 kB' 'SUnreclaim: 99620 kB' 'KernelStack: 6640 kB' 'PageTables: 4428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 320012 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55368 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:05:12.526 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.526 04:08:24 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.526 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.526 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.526 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.526 04:08:24 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.526 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.526 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.526 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.526 04:08:24 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.526 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.526 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.526 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.526 04:08:24 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.526 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.526 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.526 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.526 04:08:24 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.526 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.526 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.526 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.526 04:08:24 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.526 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.526 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.526 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.526 04:08:24 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.526 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.526 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.526 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.526 04:08:24 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.526 
04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.526 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.526 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.526 04:08:24 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.526 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.526 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.526 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.526 04:08:24 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.526 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.526 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.526 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.526 04:08:24 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.526 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.526 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.526 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.526 04:08:24 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.526 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.526 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.526 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.526 04:08:24 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.526 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.526 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.526 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.526 04:08:24 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.526 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.526 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.526 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.526 04:08:24 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.526 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.526 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.526 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.526 04:08:24 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.526 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.526 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.526 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.526 04:08:24 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.526 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.526 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.526 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.526 04:08:24 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.526 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.526 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.526 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.526 04:08:24 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.526 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.526 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.526 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.526 04:08:24 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.526 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.526 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.526 04:08:24 -- setup/common.sh@31 -- # read -r var 
val _ 00:05:12.526 04:08:24 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.526 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.526 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.526 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.526 04:08:24 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.526 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.526 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.526 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.526 04:08:24 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.526 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.526 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.526 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.526 04:08:24 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.526 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.526 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.526 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.526 04:08:24 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.526 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.526 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.526 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.526 04:08:24 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.526 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.526 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.526 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.526 04:08:24 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.526 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.526 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.526 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.526 04:08:24 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.527 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.527 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.527 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.527 04:08:24 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.527 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.527 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.527 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.527 04:08:24 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.527 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.527 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.527 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.527 04:08:24 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.527 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.527 04:08:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.527 04:08:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.527 04:08:24 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.527 04:08:24 -- setup/common.sh@32 -- # continue 00:05:12.527 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.527 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.527 04:08:25 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.527 04:08:25 -- setup/common.sh@32 -- # continue 
00:05:12.527 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.527 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.527 04:08:25 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.527 04:08:25 -- setup/common.sh@32 -- # continue 00:05:12.527 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.527 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.527 04:08:25 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.527 04:08:25 -- setup/common.sh@32 -- # continue 00:05:12.527 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.527 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.527 04:08:25 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.527 04:08:25 -- setup/common.sh@32 -- # continue 00:05:12.527 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.527 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.527 04:08:25 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.527 04:08:25 -- setup/common.sh@32 -- # continue 00:05:12.527 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.527 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.527 04:08:25 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.527 04:08:25 -- setup/common.sh@32 -- # continue 00:05:12.527 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.527 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.527 04:08:25 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.527 04:08:25 -- setup/common.sh@32 -- # continue 00:05:12.527 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.527 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.527 04:08:25 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.527 04:08:25 -- setup/common.sh@32 -- # continue 00:05:12.527 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.527 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.527 04:08:25 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.527 04:08:25 -- setup/common.sh@32 -- # continue 00:05:12.527 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.527 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.527 04:08:25 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.527 04:08:25 -- setup/common.sh@32 -- # continue 00:05:12.527 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.527 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.527 04:08:25 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.527 04:08:25 -- setup/common.sh@32 -- # continue 00:05:12.527 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.527 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.527 04:08:25 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.527 04:08:25 -- setup/common.sh@32 -- # continue 00:05:12.527 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.527 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.527 04:08:25 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.527 04:08:25 -- setup/common.sh@32 -- # continue 00:05:12.527 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.527 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 
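The summary echoed above (nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0) feeds the consistency checks at hugepages.sh@107/@109/@110: the count reported by the kernel has to line up with the requested count plus surplus and reserved pages. With this run's numbers the arithmetic is:

  nr_hugepages=1024; surp=0; resv=0
  (( 1024 == nr_hugepages + surp + resv ))   # 1024 == 1024 + 0 + 0, so the check passes
  (( 1024 == nr_hugepages ))                 # HugePages_Total from the scan matches the request
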
00:05:12.527 04:08:25 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.527 04:08:25 -- setup/common.sh@32 -- # continue 00:05:12.527 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.527 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.527 04:08:25 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.527 04:08:25 -- setup/common.sh@32 -- # continue 00:05:12.527 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.527 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.527 04:08:25 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.527 04:08:25 -- setup/common.sh@32 -- # continue 00:05:12.527 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.527 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.527 04:08:25 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.527 04:08:25 -- setup/common.sh@33 -- # echo 1024 00:05:12.527 04:08:25 -- setup/common.sh@33 -- # return 0 00:05:12.527 04:08:25 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:12.527 04:08:25 -- setup/hugepages.sh@112 -- # get_nodes 00:05:12.527 04:08:25 -- setup/hugepages.sh@27 -- # local node 00:05:12.527 04:08:25 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:12.527 04:08:25 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:12.527 04:08:25 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:12.527 04:08:25 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:12.527 04:08:25 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:12.527 04:08:25 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:12.527 04:08:25 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:12.527 04:08:25 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:12.527 04:08:25 -- setup/common.sh@18 -- # local node=0 00:05:12.527 04:08:25 -- setup/common.sh@19 -- # local var val 00:05:12.527 04:08:25 -- setup/common.sh@20 -- # local mem_f mem 00:05:12.527 04:08:25 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:12.527 04:08:25 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:12.527 04:08:25 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:12.527 04:08:25 -- setup/common.sh@28 -- # mapfile -t mem 00:05:12.527 04:08:25 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:12.527 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.527 04:08:25 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6654764 kB' 'MemUsed: 5584344 kB' 'SwapCached: 0 kB' 'Active: 456384 kB' 'Inactive: 2661224 kB' 'Active(anon): 128524 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2661224 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'FilePages: 2999572 kB' 'Mapped: 51112 kB' 'AnonPages: 119656 kB' 'Shmem: 10488 kB' 'KernelStack: 6656 kB' 'PageTables: 4464 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 82520 kB' 'Slab: 182136 kB' 'SReclaimable: 82520 kB' 'SUnreclaim: 99616 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:12.527 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.527 04:08:25 -- 
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.527 04:08:25 -- setup/common.sh@32 -- # continue 00:05:12.527 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.527 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.527 04:08:25 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.527 04:08:25 -- setup/common.sh@32 -- # continue 00:05:12.527 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.527 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.527 04:08:25 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.527 04:08:25 -- setup/common.sh@32 -- # continue 00:05:12.527 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.527 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.527 04:08:25 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.527 04:08:25 -- setup/common.sh@32 -- # continue 00:05:12.527 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.527 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.527 04:08:25 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.527 04:08:25 -- setup/common.sh@32 -- # continue 00:05:12.527 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.527 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.527 04:08:25 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.527 04:08:25 -- setup/common.sh@32 -- # continue 00:05:12.527 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.527 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.527 04:08:25 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.527 04:08:25 -- setup/common.sh@32 -- # continue 00:05:12.527 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.527 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.527 04:08:25 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.527 04:08:25 -- setup/common.sh@32 -- # continue 00:05:12.527 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.527 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.527 04:08:25 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.527 04:08:25 -- setup/common.sh@32 -- # continue 00:05:12.527 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.527 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.527 04:08:25 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.527 04:08:25 -- setup/common.sh@32 -- # continue 00:05:12.527 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.527 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.528 04:08:25 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.528 04:08:25 -- setup/common.sh@32 -- # continue 00:05:12.528 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.528 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.528 04:08:25 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.528 04:08:25 -- setup/common.sh@32 -- # continue 00:05:12.528 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.528 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.528 04:08:25 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.528 04:08:25 -- setup/common.sh@32 -- # continue 00:05:12.528 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 
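The pass that starts at hugepages.sh@115-@117 re-reads the counters from node 0's own meminfo (/sys/devices/system/node/node0/meminfo, the snapshot printed above), which reports MemUsed directly instead of MemAvailable. The figures are internally consistent, and the node-level huge-page count is the one the test asserts at the end:

  # node 0 snapshot from the trace above, values in kB
  MemTotal=12239108; MemFree=6654764; MemUsed=5584344
  (( MemUsed == MemTotal - MemFree ))   # 12239108 - 6654764 = 5584344, consistent
  # HugePages_Total on node0 is 1024, which is what "node0=1024 expecting 1024" checks below
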
00:05:12.528 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.528 04:08:25 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.528 04:08:25 -- setup/common.sh@32 -- # continue 00:05:12.528 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.528 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.528 04:08:25 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.528 04:08:25 -- setup/common.sh@32 -- # continue 00:05:12.528 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.528 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.528 04:08:25 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.528 04:08:25 -- setup/common.sh@32 -- # continue 00:05:12.528 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.528 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.528 04:08:25 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.528 04:08:25 -- setup/common.sh@32 -- # continue 00:05:12.528 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.528 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.528 04:08:25 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.528 04:08:25 -- setup/common.sh@32 -- # continue 00:05:12.528 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.528 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.528 04:08:25 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.528 04:08:25 -- setup/common.sh@32 -- # continue 00:05:12.528 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.528 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.528 04:08:25 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.528 04:08:25 -- setup/common.sh@32 -- # continue 00:05:12.528 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.528 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.528 04:08:25 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.528 04:08:25 -- setup/common.sh@32 -- # continue 00:05:12.528 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.528 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.528 04:08:25 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.528 04:08:25 -- setup/common.sh@32 -- # continue 00:05:12.528 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.528 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.528 04:08:25 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.528 04:08:25 -- setup/common.sh@32 -- # continue 00:05:12.528 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.528 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.528 04:08:25 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.528 04:08:25 -- setup/common.sh@32 -- # continue 00:05:12.528 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.528 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.528 04:08:25 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.528 04:08:25 -- setup/common.sh@32 -- # continue 00:05:12.528 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.528 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.528 04:08:25 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.528 04:08:25 -- 
setup/common.sh@32 -- # continue 00:05:12.528 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.528 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.528 04:08:25 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.528 04:08:25 -- setup/common.sh@32 -- # continue 00:05:12.528 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.528 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.528 04:08:25 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.528 04:08:25 -- setup/common.sh@32 -- # continue 00:05:12.528 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.528 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.528 04:08:25 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.528 04:08:25 -- setup/common.sh@32 -- # continue 00:05:12.528 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.528 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.528 04:08:25 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.528 04:08:25 -- setup/common.sh@32 -- # continue 00:05:12.528 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.528 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.528 04:08:25 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.528 04:08:25 -- setup/common.sh@32 -- # continue 00:05:12.528 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.528 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.528 04:08:25 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.528 04:08:25 -- setup/common.sh@32 -- # continue 00:05:12.528 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.528 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.528 04:08:25 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.528 04:08:25 -- setup/common.sh@32 -- # continue 00:05:12.528 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.528 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.528 04:08:25 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.528 04:08:25 -- setup/common.sh@32 -- # continue 00:05:12.528 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.528 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.528 04:08:25 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.528 04:08:25 -- setup/common.sh@32 -- # continue 00:05:12.528 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.528 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.528 04:08:25 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.528 04:08:25 -- setup/common.sh@32 -- # continue 00:05:12.528 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:12.528 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:12.528 04:08:25 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.528 04:08:25 -- setup/common.sh@33 -- # echo 0 00:05:12.528 04:08:25 -- setup/common.sh@33 -- # return 0 00:05:12.528 04:08:25 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:12.528 04:08:25 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:12.528 04:08:25 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:12.528 04:08:25 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:12.528 
node0=1024 expecting 1024 00:05:12.528 04:08:25 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:12.528 04:08:25 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:12.528 00:05:12.528 real 0m0.568s 00:05:12.528 user 0m0.290s 00:05:12.528 sys 0m0.276s 00:05:12.528 04:08:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:12.528 04:08:25 -- common/autotest_common.sh@10 -- # set +x 00:05:12.528 ************************************ 00:05:12.528 END TEST even_2G_alloc 00:05:12.528 ************************************ 00:05:12.787 04:08:25 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:05:12.787 04:08:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:12.787 04:08:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:12.787 04:08:25 -- common/autotest_common.sh@10 -- # set +x 00:05:12.787 ************************************ 00:05:12.787 START TEST odd_alloc 00:05:12.787 ************************************ 00:05:12.787 04:08:25 -- common/autotest_common.sh@1114 -- # odd_alloc 00:05:12.787 04:08:25 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:05:12.787 04:08:25 -- setup/hugepages.sh@49 -- # local size=2098176 00:05:12.787 04:08:25 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:12.787 04:08:25 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:12.787 04:08:25 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:05:12.787 04:08:25 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:12.787 04:08:25 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:12.787 04:08:25 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:12.787 04:08:25 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:05:12.787 04:08:25 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:12.787 04:08:25 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:12.787 04:08:25 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:12.787 04:08:25 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:12.787 04:08:25 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:12.787 04:08:25 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:12.787 04:08:25 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:05:12.787 04:08:25 -- setup/hugepages.sh@83 -- # : 0 00:05:12.787 04:08:25 -- setup/hugepages.sh@84 -- # : 0 00:05:12.787 04:08:25 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:12.787 04:08:25 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:05:12.787 04:08:25 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:05:12.787 04:08:25 -- setup/hugepages.sh@160 -- # setup output 00:05:12.787 04:08:25 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:12.787 04:08:25 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:13.049 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:13.049 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:13.049 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:13.049 04:08:25 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:05:13.049 04:08:25 -- setup/hugepages.sh@89 -- # local node 00:05:13.049 04:08:25 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:13.049 04:08:25 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:13.049 04:08:25 -- setup/hugepages.sh@92 -- # local surp 00:05:13.049 04:08:25 -- setup/hugepages.sh@93 -- # local resv 00:05:13.049 04:08:25 -- setup/hugepages.sh@94 -- # local anon 00:05:13.049 04:08:25 -- 
setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:13.049 04:08:25 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:13.049 04:08:25 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:13.049 04:08:25 -- setup/common.sh@18 -- # local node= 00:05:13.049 04:08:25 -- setup/common.sh@19 -- # local var val 00:05:13.050 04:08:25 -- setup/common.sh@20 -- # local mem_f mem 00:05:13.050 04:08:25 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:13.050 04:08:25 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:13.050 04:08:25 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:13.050 04:08:25 -- setup/common.sh@28 -- # mapfile -t mem 00:05:13.050 04:08:25 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:13.050 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.050 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.050 04:08:25 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6651740 kB' 'MemAvailable: 9444932 kB' 'Buffers: 2684 kB' 'Cached: 2996888 kB' 'SwapCached: 0 kB' 'Active: 456988 kB' 'Inactive: 2661224 kB' 'Active(anon): 129128 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2661224 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120240 kB' 'Mapped: 50940 kB' 'Shmem: 10488 kB' 'KReclaimable: 82520 kB' 'Slab: 182108 kB' 'SReclaimable: 82520 kB' 'SUnreclaim: 99588 kB' 'KernelStack: 6584 kB' 'PageTables: 4044 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458556 kB' 'Committed_AS: 320012 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55384 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:05:13.050 04:08:25 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.050 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.050 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.050 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.050 04:08:25 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.050 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.050 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.050 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.050 04:08:25 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.050 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.050 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.050 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.050 04:08:25 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.050 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.050 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.050 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.050 04:08:25 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.050 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.050 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 
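The trace above shows how odd_alloc sizes its pool: HUGEMEM=2049 (MiB) becomes a request of 2098176 kB, which with the default 2048 kB hugepage size works out to the odd count nr_hugepages=1025 (1025 pages = 2099200 kB, matching the Hugetlb field printed from /proc/meminfo). A minimal sketch of that size-to-count step, assuming a simple round-up against the 2048 kB default page size; this is illustrative only, not the actual setup/hugepages.sh logic:

  # illustrative sketch: derive the hugepage count from a kB budget
  default_hugepages=2048                                        # kB, Hugepagesize from /proc/meminfo
  size=2098176                                                  # kB, i.e. HUGEMEM=2049 MiB
  nr_hugepages=$(( (size + default_hugepages - 1) / default_hugepages ))
  echo "$nr_hugepages"                                          # prints 1025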
00:05:13.050 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.050 04:08:25 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.050 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.050 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.050 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.050 04:08:25 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.050 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.050 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.050 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.050 04:08:25 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.050 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.050 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.050 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.050 04:08:25 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.050 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.050 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.050 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.050 04:08:25 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.050 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.050 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.050 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.050 04:08:25 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.050 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.050 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.050 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.050 04:08:25 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.050 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.050 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.050 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.050 04:08:25 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.050 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.050 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.050 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.050 04:08:25 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.050 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.050 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.050 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.050 04:08:25 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.050 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.050 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.050 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.050 04:08:25 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.050 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.050 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.050 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.050 04:08:25 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.050 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.050 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.050 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.050 04:08:25 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.050 04:08:25 -- setup/common.sh@32 -- # 
continue 00:05:13.050 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.050 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.050 04:08:25 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.050 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.050 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.050 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.050 04:08:25 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.050 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.050 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.050 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.050 04:08:25 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.050 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.050 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.050 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.050 04:08:25 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.050 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.050 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.050 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.050 04:08:25 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.050 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.050 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.050 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.050 04:08:25 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.050 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.050 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.050 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.050 04:08:25 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.050 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.050 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.050 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.050 04:08:25 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.050 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.050 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.050 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.050 04:08:25 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.050 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.050 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.050 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.050 04:08:25 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.050 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.050 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.050 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.050 04:08:25 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.050 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.050 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.050 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.050 04:08:25 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.050 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.050 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.050 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.050 04:08:25 -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.050 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.050 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.050 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.050 04:08:25 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.050 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.050 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.050 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.050 04:08:25 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.050 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.050 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.050 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.050 04:08:25 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.050 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.051 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.051 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.051 04:08:25 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.051 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.051 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.051 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.051 04:08:25 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.051 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.051 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.051 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.051 04:08:25 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.051 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.051 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.051 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.051 04:08:25 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.051 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.051 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.051 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.051 04:08:25 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.051 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.051 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.051 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.051 04:08:25 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.051 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.051 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.051 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.051 04:08:25 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.051 04:08:25 -- setup/common.sh@33 -- # echo 0 00:05:13.051 04:08:25 -- setup/common.sh@33 -- # return 0 00:05:13.051 04:08:25 -- setup/hugepages.sh@97 -- # anon=0 00:05:13.051 04:08:25 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:13.051 04:08:25 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:13.051 04:08:25 -- setup/common.sh@18 -- # local node= 00:05:13.051 04:08:25 -- setup/common.sh@19 -- # local var val 00:05:13.051 04:08:25 -- setup/common.sh@20 -- # local mem_f mem 00:05:13.051 04:08:25 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:13.051 04:08:25 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:13.051 04:08:25 -- 
setup/common.sh@25 -- # [[ -n '' ]] 00:05:13.051 04:08:25 -- setup/common.sh@28 -- # mapfile -t mem 00:05:13.051 04:08:25 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:13.051 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.051 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.051 04:08:25 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6651992 kB' 'MemAvailable: 9445184 kB' 'Buffers: 2684 kB' 'Cached: 2996888 kB' 'SwapCached: 0 kB' 'Active: 456468 kB' 'Inactive: 2661224 kB' 'Active(anon): 128608 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2661224 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119748 kB' 'Mapped: 50756 kB' 'Shmem: 10488 kB' 'KReclaimable: 82516 kB' 'Slab: 182180 kB' 'SReclaimable: 82516 kB' 'SUnreclaim: 99664 kB' 'KernelStack: 6672 kB' 'PageTables: 4516 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458556 kB' 'Committed_AS: 320012 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55352 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:05:13.051 04:08:25 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.051 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.051 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.051 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.051 04:08:25 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.051 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.051 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.051 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.051 04:08:25 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.051 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.051 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.051 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.051 04:08:25 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.051 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.051 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.051 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.051 04:08:25 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.051 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.051 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.051 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.051 04:08:25 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.051 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.051 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.051 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.051 04:08:25 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.051 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.051 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.051 04:08:25 -- setup/common.sh@31 -- 
# read -r var val _ 00:05:13.051 04:08:25 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.051 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.051 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.051 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.051 04:08:25 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.051 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.051 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.051 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.051 04:08:25 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.051 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.051 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.051 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.051 04:08:25 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.051 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.051 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.051 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.051 04:08:25 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.051 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.051 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.051 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.051 04:08:25 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.051 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.051 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.051 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.051 04:08:25 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.051 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.051 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.051 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.051 04:08:25 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.051 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.051 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.051 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.051 04:08:25 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.051 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.051 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.051 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.051 04:08:25 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.051 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.051 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.051 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.051 04:08:25 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.051 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.051 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.051 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.051 04:08:25 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.051 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.051 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.051 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.051 04:08:25 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.051 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.051 
04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.051 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.051 04:08:25 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.051 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.051 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.051 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.051 04:08:25 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.051 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.051 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.051 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.051 04:08:25 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.051 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.051 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.051 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.051 04:08:25 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.051 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.051 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.051 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.051 04:08:25 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.051 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.051 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.051 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.051 04:08:25 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.051 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.051 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.052 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.052 04:08:25 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.052 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.052 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.052 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.052 04:08:25 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.052 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.052 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.052 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.052 04:08:25 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.052 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.052 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.052 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.052 04:08:25 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.052 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.052 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.052 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.052 04:08:25 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.052 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.052 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.052 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.052 04:08:25 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.052 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.052 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.052 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.052 04:08:25 -- setup/common.sh@32 -- # [[ WritebackTmp == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.052 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.052 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.052 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.052 04:08:25 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.052 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.052 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.052 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.052 04:08:25 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.052 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.052 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.052 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.052 04:08:25 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.052 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.052 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.052 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.052 04:08:25 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.052 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.052 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.052 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.052 04:08:25 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.052 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.052 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.052 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.052 04:08:25 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.052 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.052 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.052 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.052 04:08:25 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.052 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.052 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.052 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.052 04:08:25 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.052 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.052 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.052 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.052 04:08:25 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.052 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.052 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.052 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.052 04:08:25 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.052 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.052 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.052 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.052 04:08:25 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.052 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.052 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.052 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.052 04:08:25 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.052 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.052 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 
00:05:13.052 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.052 04:08:25 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.052 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.052 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.052 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.052 04:08:25 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.052 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.052 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.052 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.052 04:08:25 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.052 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.052 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.052 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.052 04:08:25 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.052 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.052 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.052 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.052 04:08:25 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.052 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.052 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.052 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.052 04:08:25 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.052 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.052 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.052 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.052 04:08:25 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.052 04:08:25 -- setup/common.sh@33 -- # echo 0 00:05:13.052 04:08:25 -- setup/common.sh@33 -- # return 0 00:05:13.052 04:08:25 -- setup/hugepages.sh@99 -- # surp=0 00:05:13.052 04:08:25 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:13.052 04:08:25 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:13.052 04:08:25 -- setup/common.sh@18 -- # local node= 00:05:13.052 04:08:25 -- setup/common.sh@19 -- # local var val 00:05:13.052 04:08:25 -- setup/common.sh@20 -- # local mem_f mem 00:05:13.052 04:08:25 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:13.052 04:08:25 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:13.052 04:08:25 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:13.052 04:08:25 -- setup/common.sh@28 -- # mapfile -t mem 00:05:13.052 04:08:25 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:13.052 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.052 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.052 04:08:25 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6651992 kB' 'MemAvailable: 9445184 kB' 'Buffers: 2684 kB' 'Cached: 2996888 kB' 'SwapCached: 0 kB' 'Active: 456400 kB' 'Inactive: 2661224 kB' 'Active(anon): 128540 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2661224 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119668 kB' 'Mapped: 50756 kB' 'Shmem: 10488 kB' 'KReclaimable: 82516 kB' 'Slab: 182164 kB' 'SReclaimable: 82516 kB' 'SUnreclaim: 99648 kB' 'KernelStack: 6640 kB' 
'PageTables: 4432 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458556 kB' 'Committed_AS: 320012 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55352 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:05:13.052 04:08:25 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.052 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.052 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.052 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.052 04:08:25 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.052 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.052 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.052 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.052 04:08:25 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.052 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.052 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.052 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.052 04:08:25 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.052 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.052 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.052 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.052 04:08:25 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.052 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.052 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.052 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.052 04:08:25 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.052 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.052 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.053 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.053 04:08:25 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.053 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.053 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.053 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.053 04:08:25 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.053 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.053 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.053 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.053 04:08:25 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.053 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.053 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.053 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.053 04:08:25 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.053 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.053 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.053 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.053 04:08:25 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:05:13.053 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.053 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.053 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.053 04:08:25 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.053 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.053 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.053 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.053 04:08:25 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.053 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.053 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.053 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.053 04:08:25 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.053 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.053 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.053 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.053 04:08:25 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.053 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.053 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.053 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.053 04:08:25 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.053 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.053 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.053 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.053 04:08:25 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.053 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.053 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.053 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.053 04:08:25 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.053 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.053 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.053 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.053 04:08:25 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.053 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.053 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.053 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.053 04:08:25 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.053 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.053 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.053 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.053 04:08:25 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.053 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.053 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.053 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.053 04:08:25 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.053 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.053 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.053 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.053 04:08:25 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.053 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.053 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.053 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.053 04:08:25 
-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.053 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.053 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.053 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.053 04:08:25 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.053 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.053 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.053 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.053 04:08:25 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.053 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.053 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.053 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.053 04:08:25 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.053 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.053 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.053 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.053 04:08:25 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.053 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.053 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.053 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.053 04:08:25 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.053 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.053 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.053 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.053 04:08:25 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.053 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.053 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.053 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.053 04:08:25 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.053 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.053 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.053 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.053 04:08:25 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.053 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.053 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.053 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.053 04:08:25 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.053 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.053 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.053 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.053 04:08:25 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.053 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.053 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.053 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.053 04:08:25 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.053 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.053 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.053 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.053 04:08:25 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.053 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.053 04:08:25 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:13.053 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.053 04:08:25 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.053 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.053 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.053 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.053 04:08:25 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.053 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.053 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.053 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.053 04:08:25 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.053 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.053 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.053 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.053 04:08:25 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.053 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.053 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.053 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.053 04:08:25 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.053 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.053 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.053 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.053 04:08:25 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.053 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.053 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.053 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.053 04:08:25 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.053 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.053 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.053 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.053 04:08:25 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.053 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.053 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.053 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.053 04:08:25 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.053 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.053 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.053 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.053 04:08:25 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.053 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.053 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.053 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.053 04:08:25 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.054 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.054 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.054 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.054 04:08:25 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.054 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.054 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.054 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.054 04:08:25 -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.054 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.054 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.054 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.054 04:08:25 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.054 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.054 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.054 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.054 04:08:25 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.054 04:08:25 -- setup/common.sh@33 -- # echo 0 00:05:13.054 04:08:25 -- setup/common.sh@33 -- # return 0 00:05:13.054 nr_hugepages=1025 00:05:13.054 resv_hugepages=0 00:05:13.054 surplus_hugepages=0 00:05:13.054 anon_hugepages=0 00:05:13.054 04:08:25 -- setup/hugepages.sh@100 -- # resv=0 00:05:13.054 04:08:25 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:05:13.054 04:08:25 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:13.054 04:08:25 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:13.054 04:08:25 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:13.054 04:08:25 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:13.054 04:08:25 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:05:13.054 04:08:25 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:13.054 04:08:25 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:13.054 04:08:25 -- setup/common.sh@18 -- # local node= 00:05:13.054 04:08:25 -- setup/common.sh@19 -- # local var val 00:05:13.054 04:08:25 -- setup/common.sh@20 -- # local mem_f mem 00:05:13.054 04:08:25 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:13.054 04:08:25 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:13.054 04:08:25 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:13.054 04:08:25 -- setup/common.sh@28 -- # mapfile -t mem 00:05:13.054 04:08:25 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:13.054 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.054 04:08:25 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6651992 kB' 'MemAvailable: 9445184 kB' 'Buffers: 2684 kB' 'Cached: 2996888 kB' 'SwapCached: 0 kB' 'Active: 456320 kB' 'Inactive: 2661224 kB' 'Active(anon): 128460 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2661224 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119624 kB' 'Mapped: 50756 kB' 'Shmem: 10488 kB' 'KReclaimable: 82516 kB' 'Slab: 182144 kB' 'SReclaimable: 82516 kB' 'SUnreclaim: 99628 kB' 'KernelStack: 6656 kB' 'PageTables: 4496 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458556 kB' 'Committed_AS: 320012 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55336 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:05:13.054 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 
00:05:13.054 04:08:25 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.054 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.054 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.054 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.054 04:08:25 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.054 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.054 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.054 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.054 04:08:25 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.054 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.054 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.054 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.054 04:08:25 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.054 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.054 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.054 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.054 04:08:25 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.054 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.054 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.054 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.054 04:08:25 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.054 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.054 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.054 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.054 04:08:25 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.054 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.054 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.054 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.054 04:08:25 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.054 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.054 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.054 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.054 04:08:25 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.054 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.054 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.054 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.054 04:08:25 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.054 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.054 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.054 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.054 04:08:25 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.054 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.054 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.054 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.054 04:08:25 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.054 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.054 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.054 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.054 04:08:25 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.054 04:08:25 -- setup/common.sh@32 -- # continue 
00:05:13.054 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.054 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.054 04:08:25 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.054 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.054 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.054 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.054 04:08:25 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.054 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.054 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.054 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.054 04:08:25 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.054 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.054 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.054 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.054 04:08:25 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.054 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.054 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.054 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.054 04:08:25 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.054 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.054 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.054 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.054 04:08:25 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.054 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.054 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.054 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.054 04:08:25 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.054 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.054 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.054 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.054 04:08:25 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.054 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.055 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.055 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.055 04:08:25 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.055 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.055 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.055 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.055 04:08:25 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.055 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.055 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.055 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.055 04:08:25 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.055 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.055 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.055 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.055 04:08:25 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.055 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.055 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.055 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.055 04:08:25 -- setup/common.sh@32 -- # [[ SReclaimable 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.055 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.055 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.055 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.055 04:08:25 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.055 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.055 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.055 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.055 04:08:25 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.055 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.055 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.055 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.055 04:08:25 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.055 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.055 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.055 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.055 04:08:25 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.055 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.055 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.055 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.055 04:08:25 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.055 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.055 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.055 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.055 04:08:25 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.055 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.055 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.055 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.055 04:08:25 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.055 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.055 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.055 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.055 04:08:25 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.055 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.055 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.055 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.055 04:08:25 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.055 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.055 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.055 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.055 04:08:25 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.055 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.055 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.055 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.055 04:08:25 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.055 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.055 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.055 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.055 04:08:25 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.055 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.055 04:08:25 -- setup/common.sh@31 -- # 
IFS=': ' 00:05:13.055 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.055 04:08:25 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.055 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.055 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.055 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.055 04:08:25 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.055 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.055 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.055 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.055 04:08:25 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.055 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.055 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.055 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.055 04:08:25 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.055 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.055 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.055 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.055 04:08:25 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.055 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.055 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.055 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.055 04:08:25 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.055 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.055 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.055 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.055 04:08:25 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.055 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.055 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.055 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.055 04:08:25 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.055 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.055 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.055 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.055 04:08:25 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.055 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.055 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.055 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.055 04:08:25 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.055 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.055 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.055 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.055 04:08:25 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.055 04:08:25 -- setup/common.sh@33 -- # echo 1025 00:05:13.055 04:08:25 -- setup/common.sh@33 -- # return 0 00:05:13.055 04:08:25 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:13.055 04:08:25 -- setup/hugepages.sh@112 -- # get_nodes 00:05:13.055 04:08:25 -- setup/hugepages.sh@27 -- # local node 00:05:13.055 04:08:25 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:13.055 04:08:25 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 
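
Note on the setup/common.sh xtrace above: it is just a field-by-field scan of /proc/meminfo (falling back to the per-node file under /sys/devices/system/node) until the requested key is found, here HugePages_Total with value 1025 on node0. A minimal sketch of that lookup pattern follows; the helper name and structure are illustrative, not the exact SPDK function:

    # Illustrative only -- not the actual setup/common.sh helper.
    get_meminfo_value() {
        local get=$1 node=${2:-} mem_f=/proc/meminfo line var val _
        # Per-node fallback, as the trace does for node0.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while read -r line; do
            line=${line#Node [0-9]* }               # drop the "Node N " prefix of per-node files
            IFS=': ' read -r var val _ <<< "$line"  # same IFS split the xtrace shows
            if [[ $var == "$get" ]]; then
                echo "$val"                         # numeric value, without the trailing "kB"
                return 0
            fi
        done < "$mem_f"
        return 1
    }

    get_meminfo_value HugePages_Total 0   # prints 1025 for node0 in the run traced above
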
00:05:13.055 04:08:25 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:13.055 04:08:25 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:13.055 04:08:25 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:13.055 04:08:25 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:13.055 04:08:25 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:13.055 04:08:25 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:13.055 04:08:25 -- setup/common.sh@18 -- # local node=0 00:05:13.055 04:08:25 -- setup/common.sh@19 -- # local var val 00:05:13.055 04:08:25 -- setup/common.sh@20 -- # local mem_f mem 00:05:13.055 04:08:25 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:13.055 04:08:25 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:13.055 04:08:25 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:13.055 04:08:25 -- setup/common.sh@28 -- # mapfile -t mem 00:05:13.055 04:08:25 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:13.055 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.055 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.055 04:08:25 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6652548 kB' 'MemUsed: 5586560 kB' 'SwapCached: 0 kB' 'Active: 456368 kB' 'Inactive: 2661224 kB' 'Active(anon): 128508 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2661224 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 2999572 kB' 'Mapped: 50756 kB' 'AnonPages: 119624 kB' 'Shmem: 10488 kB' 'KernelStack: 6624 kB' 'PageTables: 4388 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 82516 kB' 'Slab: 182124 kB' 'SReclaimable: 82516 kB' 'SUnreclaim: 99608 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:05:13.055 04:08:25 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.055 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.055 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.055 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.055 04:08:25 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.055 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.055 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.055 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.055 04:08:25 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.055 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.055 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.055 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.055 04:08:25 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.055 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.055 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.056 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.056 04:08:25 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.056 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.056 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.056 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.056 04:08:25 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.056 
04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.056 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.056 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.056 04:08:25 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.056 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.056 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.056 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.056 04:08:25 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.056 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.056 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.056 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.056 04:08:25 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.056 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.056 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.056 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.056 04:08:25 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.056 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.056 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.056 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.056 04:08:25 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.056 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.056 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.056 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.056 04:08:25 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.056 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.056 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.056 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.056 04:08:25 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.056 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.056 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.056 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.056 04:08:25 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.056 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.056 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.056 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.056 04:08:25 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.056 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.056 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.056 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.056 04:08:25 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.056 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.056 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.056 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.056 04:08:25 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.056 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.056 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.056 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.056 04:08:25 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.056 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.056 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.056 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.056 
04:08:25 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.056 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.056 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.056 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.056 04:08:25 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.056 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.056 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.056 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.056 04:08:25 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.056 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.056 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.056 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.056 04:08:25 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.056 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.056 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.056 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.056 04:08:25 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.056 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.056 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.056 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.056 04:08:25 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.056 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.056 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.056 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.056 04:08:25 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.056 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.056 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.056 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.056 04:08:25 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.056 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.056 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.056 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.056 04:08:25 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.056 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.056 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.056 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.056 04:08:25 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.056 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.056 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.056 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.314 04:08:25 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.314 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.314 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.314 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.314 04:08:25 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.314 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.314 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.314 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.314 04:08:25 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.314 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.314 04:08:25 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:13.314 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.314 04:08:25 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.314 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.315 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.315 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.315 04:08:25 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.315 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.315 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.315 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.315 04:08:25 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.315 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.315 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.315 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.315 04:08:25 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.315 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.315 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.315 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.315 04:08:25 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.315 04:08:25 -- setup/common.sh@32 -- # continue 00:05:13.315 04:08:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.315 04:08:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.315 04:08:25 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.315 04:08:25 -- setup/common.sh@33 -- # echo 0 00:05:13.315 04:08:25 -- setup/common.sh@33 -- # return 0 00:05:13.315 04:08:25 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:13.315 04:08:25 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:13.315 node0=1025 expecting 1025 00:05:13.315 ************************************ 00:05:13.315 END TEST odd_alloc 00:05:13.315 ************************************ 00:05:13.315 04:08:25 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:13.315 04:08:25 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:13.315 04:08:25 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:05:13.315 04:08:25 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:05:13.315 00:05:13.315 real 0m0.525s 00:05:13.315 user 0m0.243s 00:05:13.315 sys 0m0.293s 00:05:13.315 04:08:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:13.315 04:08:25 -- common/autotest_common.sh@10 -- # set +x 00:05:13.315 04:08:25 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:05:13.315 04:08:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:13.315 04:08:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:13.315 04:08:25 -- common/autotest_common.sh@10 -- # set +x 00:05:13.315 ************************************ 00:05:13.315 START TEST custom_alloc 00:05:13.315 ************************************ 00:05:13.315 04:08:25 -- common/autotest_common.sh@1114 -- # custom_alloc 00:05:13.315 04:08:25 -- setup/hugepages.sh@167 -- # local IFS=, 00:05:13.315 04:08:25 -- setup/hugepages.sh@169 -- # local node 00:05:13.315 04:08:25 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:05:13.315 04:08:25 -- setup/hugepages.sh@170 -- # local nodes_hp 00:05:13.315 04:08:25 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:05:13.315 04:08:25 -- setup/hugepages.sh@174 -- 
# get_test_nr_hugepages 1048576 00:05:13.315 04:08:25 -- setup/hugepages.sh@49 -- # local size=1048576 00:05:13.315 04:08:25 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:13.315 04:08:25 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:13.315 04:08:25 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:13.315 04:08:25 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:13.315 04:08:25 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:13.315 04:08:25 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:13.315 04:08:25 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:13.315 04:08:25 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:13.315 04:08:25 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:13.315 04:08:25 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:13.315 04:08:25 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:13.315 04:08:25 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:13.315 04:08:25 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:13.315 04:08:25 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:13.315 04:08:25 -- setup/hugepages.sh@83 -- # : 0 00:05:13.315 04:08:25 -- setup/hugepages.sh@84 -- # : 0 00:05:13.315 04:08:25 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:13.315 04:08:25 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:05:13.315 04:08:25 -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:05:13.315 04:08:25 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:05:13.315 04:08:25 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:13.315 04:08:25 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:13.315 04:08:25 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:05:13.315 04:08:25 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:13.315 04:08:25 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:13.315 04:08:25 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:13.315 04:08:25 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:13.315 04:08:25 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:13.315 04:08:25 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:13.315 04:08:25 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:13.315 04:08:25 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:05:13.315 04:08:25 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:13.315 04:08:25 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:05:13.315 04:08:25 -- setup/hugepages.sh@78 -- # return 0 00:05:13.315 04:08:25 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:05:13.315 04:08:25 -- setup/hugepages.sh@187 -- # setup output 00:05:13.315 04:08:25 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:13.315 04:08:25 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:13.575 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:13.575 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:13.575 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:13.575 04:08:26 -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:05:13.575 04:08:26 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:05:13.575 04:08:26 -- setup/hugepages.sh@89 -- # local node 00:05:13.575 04:08:26 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:13.575 04:08:26 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:13.575 04:08:26 -- setup/hugepages.sh@92 -- # local surp 
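
The custom_alloc parameters above follow from simple arithmetic: get_test_nr_hugepages is handed a 1048576 kB target and, with the 2048 kB default hugepage size reported in the meminfo dumps below ('Hugepagesize: 2048 kB'), that yields 512 pages, all assigned to the single node (HUGENODE='nodes_hp[0]=512'). A sketch of that derivation, with illustrative variable names rather than the script's own:

    # Illustrative arithmetic; variable names are not from the SPDK scripts.
    target_kb=1048576        # pool size requested by the custom_alloc test above
    hugepage_kb=2048         # 'Hugepagesize: 2048 kB' in the meminfo dumps below
    nr_hugepages=$(( target_kb / hugepage_kb ))    # = 512
    echo "HUGENODE=nodes_hp[0]=$nr_hugepages"      # single-node layout, matching the log
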
00:05:13.575 04:08:26 -- setup/hugepages.sh@93 -- # local resv 00:05:13.575 04:08:26 -- setup/hugepages.sh@94 -- # local anon 00:05:13.575 04:08:26 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:13.576 04:08:26 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:13.576 04:08:26 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:13.576 04:08:26 -- setup/common.sh@18 -- # local node= 00:05:13.576 04:08:26 -- setup/common.sh@19 -- # local var val 00:05:13.576 04:08:26 -- setup/common.sh@20 -- # local mem_f mem 00:05:13.576 04:08:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:13.576 04:08:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:13.576 04:08:26 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:13.576 04:08:26 -- setup/common.sh@28 -- # mapfile -t mem 00:05:13.576 04:08:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:13.576 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.576 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.576 04:08:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 7710240 kB' 'MemAvailable: 10503432 kB' 'Buffers: 2684 kB' 'Cached: 2996888 kB' 'SwapCached: 0 kB' 'Active: 457316 kB' 'Inactive: 2661224 kB' 'Active(anon): 129456 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2661224 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120440 kB' 'Mapped: 50872 kB' 'Shmem: 10488 kB' 'KReclaimable: 82516 kB' 'Slab: 182180 kB' 'SReclaimable: 82516 kB' 'SUnreclaim: 99664 kB' 'KernelStack: 6692 kB' 'PageTables: 4664 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983868 kB' 'Committed_AS: 319644 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55416 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:05:13.576 04:08:26 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.576 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.576 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.576 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.576 04:08:26 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.576 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.576 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.576 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.576 04:08:26 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.576 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.576 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.576 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.576 04:08:26 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.576 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.576 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.576 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.576 04:08:26 -- setup/common.sh@32 -- # [[ Cached == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.576 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.576 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.576 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.576 04:08:26 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.576 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.576 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.576 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.576 04:08:26 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.576 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.576 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.576 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.576 04:08:26 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.576 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.576 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.576 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.576 04:08:26 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.576 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.576 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.576 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.576 04:08:26 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.576 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.576 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.576 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.576 04:08:26 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.576 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.576 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.576 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.576 04:08:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.576 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.576 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.576 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.576 04:08:26 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.576 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.576 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.576 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.576 04:08:26 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.576 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.576 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.576 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.576 04:08:26 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.576 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.576 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.576 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.576 04:08:26 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.576 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.576 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.576 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.576 04:08:26 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.576 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.576 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.576 04:08:26 -- setup/common.sh@31 -- # read -r var val 
_ 00:05:13.576 04:08:26 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.576 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.576 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.576 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.576 04:08:26 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.576 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.576 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.576 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.576 04:08:26 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.576 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.576 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.576 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.576 04:08:26 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.576 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.576 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.576 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.576 04:08:26 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.576 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.576 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.576 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.576 04:08:26 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.576 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.576 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.576 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.576 04:08:26 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.576 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.576 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.576 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.576 04:08:26 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.576 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.576 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.576 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.576 04:08:26 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.576 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.576 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.576 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.576 04:08:26 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.576 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.576 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.576 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.576 04:08:26 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.576 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.576 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.576 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.577 04:08:26 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.577 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.577 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.577 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.577 04:08:26 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.577 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.577 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 
00:05:13.577 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.577 04:08:26 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.577 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.577 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.577 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.577 04:08:26 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.577 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.577 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.577 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.577 04:08:26 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.577 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.577 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.577 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.577 04:08:26 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.577 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.577 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.577 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.577 04:08:26 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.577 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.577 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.577 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.577 04:08:26 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.577 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.577 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.577 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.577 04:08:26 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.577 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.577 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.577 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.577 04:08:26 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.577 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.577 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.577 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.577 04:08:26 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.577 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.577 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.577 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.577 04:08:26 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.577 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.577 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.577 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.577 04:08:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.577 04:08:26 -- setup/common.sh@33 -- # echo 0 00:05:13.577 04:08:26 -- setup/common.sh@33 -- # return 0 00:05:13.577 04:08:26 -- setup/hugepages.sh@97 -- # anon=0 00:05:13.577 04:08:26 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:13.577 04:08:26 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:13.577 04:08:26 -- setup/common.sh@18 -- # local node= 00:05:13.577 04:08:26 -- setup/common.sh@19 -- # local var val 00:05:13.577 04:08:26 -- setup/common.sh@20 -- # local mem_f mem 00:05:13.577 04:08:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 
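
Before checking surplus pages, verify_nr_hugepages gates on transparent hugepages: because the THP setting seen above ('always [madvise] never') does not contain '[never]', it reads AnonHugePages, presumably so THP-backed anonymous memory can be excluded from the later accounting; here it comes back 0, so anon=0. A rough sketch of that gate, assuming the value is read from the usual /sys/kernel/mm/transparent_hugepage/enabled file and reusing the illustrative get_meminfo_value helper from the earlier sketch:

    # Rough sketch of the THP gate; the sysfs path and helper name are assumptions.
    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null)   # e.g. "always [madvise] never"
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo_value AnonHugePages)    # 0 in this run (AnonHugePages: 0 kB)
    else
        anon=0
    fi
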
00:05:13.577 04:08:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:13.577 04:08:26 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:13.577 04:08:26 -- setup/common.sh@28 -- # mapfile -t mem 00:05:13.577 04:08:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:13.577 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.577 04:08:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 7710060 kB' 'MemAvailable: 10503252 kB' 'Buffers: 2684 kB' 'Cached: 2996892 kB' 'SwapCached: 0 kB' 'Active: 456300 kB' 'Inactive: 2661224 kB' 'Active(anon): 128440 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2661224 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119644 kB' 'Mapped: 50756 kB' 'Shmem: 10488 kB' 'KReclaimable: 82516 kB' 'Slab: 182216 kB' 'SReclaimable: 82516 kB' 'SUnreclaim: 99700 kB' 'KernelStack: 6640 kB' 'PageTables: 4428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983868 kB' 'Committed_AS: 320012 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55320 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:05:13.577 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.577 04:08:26 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.577 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.577 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.577 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.577 04:08:26 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.577 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.577 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.577 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.577 04:08:26 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.577 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.577 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.577 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.577 04:08:26 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.577 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.577 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.577 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.577 04:08:26 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.577 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.577 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.577 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.577 04:08:26 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.577 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.577 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.577 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.577 04:08:26 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.577 04:08:26 -- 
setup/common.sh@32 -- # continue 00:05:13.577 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.577 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.577 04:08:26 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.577 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.577 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.577 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.577 04:08:26 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.577 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.577 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.577 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.577 04:08:26 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.577 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.577 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.577 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.577 04:08:26 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.577 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.577 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.577 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.577 04:08:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.577 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.577 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.577 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.577 04:08:26 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.577 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.577 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.577 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.577 04:08:26 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.577 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.577 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.577 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.577 04:08:26 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.577 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.578 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.578 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.578 04:08:26 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.578 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.578 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.578 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.578 04:08:26 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.578 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.578 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.578 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.578 04:08:26 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.578 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.578 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.578 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.578 04:08:26 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.578 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.578 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.578 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.578 04:08:26 -- 
setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.578 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.578 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.578 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.578 04:08:26 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.578 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.578 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.578 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.578 04:08:26 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.578 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.578 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.578 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.578 04:08:26 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.578 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.578 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.578 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.578 04:08:26 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.578 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.578 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.578 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.578 04:08:26 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.578 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.578 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.578 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.578 04:08:26 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.578 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.578 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.578 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.578 04:08:26 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.578 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.578 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.578 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.578 04:08:26 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.578 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.578 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.578 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.578 04:08:26 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.578 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.578 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.578 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.578 04:08:26 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.578 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.578 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.578 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.578 04:08:26 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.578 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.578 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.578 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.578 04:08:26 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.578 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.578 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 
00:05:13.578 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.578 04:08:26 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.578 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.578 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.578 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.578 04:08:26 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.578 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.578 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.578 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.578 04:08:26 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.578 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.578 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.578 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.578 04:08:26 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.578 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.578 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.578 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.578 04:08:26 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.578 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.578 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.578 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.578 04:08:26 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.578 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.578 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.578 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.578 04:08:26 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.578 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.578 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.578 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.578 04:08:26 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.578 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.578 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.578 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.578 04:08:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.578 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.578 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.578 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.578 04:08:26 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.578 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.578 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.578 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.578 04:08:26 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.578 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.578 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.578 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.578 04:08:26 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.578 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.578 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.578 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.578 04:08:26 -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.578 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.578 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.578 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.578 04:08:26 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.578 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.578 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.578 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.578 04:08:26 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.578 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.578 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.578 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.578 04:08:26 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.578 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.578 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.578 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.578 04:08:26 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.578 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.578 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.578 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.578 04:08:26 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.578 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.578 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.578 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.579 04:08:26 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.579 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.579 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.579 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.579 04:08:26 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.579 04:08:26 -- setup/common.sh@33 -- # echo 0 00:05:13.579 04:08:26 -- setup/common.sh@33 -- # return 0 00:05:13.579 04:08:26 -- setup/hugepages.sh@99 -- # surp=0 00:05:13.579 04:08:26 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:13.579 04:08:26 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:13.579 04:08:26 -- setup/common.sh@18 -- # local node= 00:05:13.579 04:08:26 -- setup/common.sh@19 -- # local var val 00:05:13.579 04:08:26 -- setup/common.sh@20 -- # local mem_f mem 00:05:13.579 04:08:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:13.579 04:08:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:13.579 04:08:26 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:13.579 04:08:26 -- setup/common.sh@28 -- # mapfile -t mem 00:05:13.579 04:08:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:13.579 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.579 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.579 04:08:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 7710060 kB' 'MemAvailable: 10503252 kB' 'Buffers: 2684 kB' 'Cached: 2996892 kB' 'SwapCached: 0 kB' 'Active: 456144 kB' 'Inactive: 2661224 kB' 'Active(anon): 128284 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2661224 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119436 kB' 'Mapped: 
50756 kB' 'Shmem: 10488 kB' 'KReclaimable: 82516 kB' 'Slab: 182212 kB' 'SReclaimable: 82516 kB' 'SUnreclaim: 99696 kB' 'KernelStack: 6624 kB' 'PageTables: 4384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983868 kB' 'Committed_AS: 320012 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55320 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:05:13.579 04:08:26 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.579 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.579 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.579 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.579 04:08:26 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.579 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.579 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.579 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.579 04:08:26 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.579 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.579 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.579 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.579 04:08:26 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.579 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.579 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.579 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.579 04:08:26 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.579 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.579 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.579 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.579 04:08:26 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.579 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.579 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.579 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.579 04:08:26 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.579 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.579 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.579 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.579 04:08:26 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.579 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.579 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.579 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.579 04:08:26 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.579 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.579 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.579 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.579 04:08:26 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.579 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.579 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.579 04:08:26 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:13.579 04:08:26 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.579 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.579 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.579 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.579 04:08:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.579 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.579 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.579 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.579 04:08:26 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.579 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.579 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.579 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.579 04:08:26 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.579 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.579 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.579 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.579 04:08:26 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.579 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.579 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.579 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.579 04:08:26 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.579 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.579 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.579 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.579 04:08:26 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.579 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.579 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.579 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.579 04:08:26 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.579 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.579 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.579 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.579 04:08:26 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.579 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.579 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.579 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.579 04:08:26 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.579 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.579 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.579 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.579 04:08:26 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.579 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.579 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.579 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.579 04:08:26 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.579 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.579 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.579 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.579 04:08:26 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.579 04:08:26 -- setup/common.sh@32 -- # continue 
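
These per-key scans feed the same bookkeeping the odd_alloc test applied at setup/hugepages.sh@110 near the top of this excerpt: the pool is accepted only when HugePages_Total equals the requested count plus surplus and reserved pages. Restated with the illustrative helper from the first sketch:

    # Illustrative restatement of the hugepages.sh@110 check, reusing get_meminfo_value.
    nr_hugepages=512                               # requested by this custom_alloc test
    total=$(get_meminfo_value HugePages_Total)     # 512 here; 1025 in the odd_alloc run above
    surp=$(get_meminfo_value HugePages_Surp)       # 0 in this run
    resv=$(get_meminfo_value HugePages_Rsvd)       # 0 expected; being read in the trace above
    (( total == nr_hugepages + surp + resv )) || echo "unexpected hugepage count: $total"
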
00:05:13.579 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.579 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.579 04:08:26 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.579 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.579 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.579 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.579 04:08:26 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.579 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.579 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.579 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.579 04:08:26 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.579 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.579 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.579 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.579 04:08:26 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.580 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.580 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.580 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.580 04:08:26 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.580 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.580 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.580 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.580 04:08:26 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.580 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.580 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.580 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.580 04:08:26 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.580 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.580 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.580 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.580 04:08:26 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.580 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.580 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.580 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.580 04:08:26 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.580 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.580 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.580 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.580 04:08:26 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.580 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.580 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.580 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.580 04:08:26 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.580 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.580 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.580 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.580 04:08:26 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.580 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.580 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.580 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.580 04:08:26 -- setup/common.sh@32 -- # [[ 
VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.580 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.580 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.580 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.580 04:08:26 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.580 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.580 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.580 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.580 04:08:26 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.580 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.580 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.580 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.580 04:08:26 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.580 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.580 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.580 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.580 04:08:26 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.580 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.580 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.580 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.580 04:08:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.580 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.580 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.580 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.580 04:08:26 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.580 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.580 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.580 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.580 04:08:26 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.580 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.580 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.580 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.580 04:08:26 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.580 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.580 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.580 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.580 04:08:26 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.580 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.580 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.580 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.580 04:08:26 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.580 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.580 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.580 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.580 04:08:26 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.580 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.580 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.580 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.580 04:08:26 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.580 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.580 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 
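The long run of "[[ <field> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]" / "continue" entries above is the xtrace of setup/common.sh's get_meminfo walking the meminfo snapshot one field at a time: split each line on ': ', skip fields until the requested key matches, then echo its value and return. A minimal standalone sketch of that lookup pattern (simplified; the real helper in setup/common.sh also handles per-node meminfo files, and get_meminfo_value is a name used only for this sketch):

    get_meminfo_value() {
        local key=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$key" ]] || continue   # same skip-until-match loop seen in the trace
            echo "$val"
            return 0
        done < /proc/meminfo
        return 1
    }
    get_meminfo_value HugePages_Rsvd           # 0 on this test VM, hence resv=0 below
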
00:05:13.580 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.580 04:08:26 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.580 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.580 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.580 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.580 04:08:26 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.580 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.580 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.580 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.580 04:08:26 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.580 04:08:26 -- setup/common.sh@33 -- # echo 0 00:05:13.580 04:08:26 -- setup/common.sh@33 -- # return 0 00:05:13.580 04:08:26 -- setup/hugepages.sh@100 -- # resv=0 00:05:13.580 nr_hugepages=512 00:05:13.580 04:08:26 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:05:13.580 resv_hugepages=0 00:05:13.580 04:08:26 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:13.580 surplus_hugepages=0 00:05:13.580 04:08:26 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:13.580 anon_hugepages=0 00:05:13.580 04:08:26 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:13.580 04:08:26 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:13.580 04:08:26 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:05:13.841 04:08:26 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:13.841 04:08:26 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:13.841 04:08:26 -- setup/common.sh@18 -- # local node= 00:05:13.841 04:08:26 -- setup/common.sh@19 -- # local var val 00:05:13.841 04:08:26 -- setup/common.sh@20 -- # local mem_f mem 00:05:13.841 04:08:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:13.841 04:08:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:13.841 04:08:26 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:13.841 04:08:26 -- setup/common.sh@28 -- # mapfile -t mem 00:05:13.841 04:08:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:13.841 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.841 04:08:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 7710060 kB' 'MemAvailable: 10503252 kB' 'Buffers: 2684 kB' 'Cached: 2996892 kB' 'SwapCached: 0 kB' 'Active: 456436 kB' 'Inactive: 2661224 kB' 'Active(anon): 128576 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2661224 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119724 kB' 'Mapped: 50756 kB' 'Shmem: 10488 kB' 'KReclaimable: 82516 kB' 'Slab: 182212 kB' 'SReclaimable: 82516 kB' 'SUnreclaim: 99696 kB' 'KernelStack: 6640 kB' 'PageTables: 4428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983868 kB' 'Committed_AS: 320012 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55336 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 
'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:05:13.841 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.841 04:08:26 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.841 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.841 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.841 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.841 04:08:26 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.841 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.841 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.841 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.841 04:08:26 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.841 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.841 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.841 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.841 04:08:26 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.841 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.841 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.841 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.841 04:08:26 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.841 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.841 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.841 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.841 04:08:26 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.841 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.841 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.841 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.841 04:08:26 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.841 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.841 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.841 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.841 04:08:26 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.841 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.841 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.841 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.841 04:08:26 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.841 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.841 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.841 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.841 04:08:26 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.841 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.841 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.841 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.841 04:08:26 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.841 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.841 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.841 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.841 04:08:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.841 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.841 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.841 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.841 04:08:26 -- setup/common.sh@32 
-- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.841 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.841 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.841 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.841 04:08:26 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.841 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.841 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.841 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.841 04:08:26 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.841 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.841 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.841 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.841 04:08:26 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.841 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.841 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.841 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.841 04:08:26 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.841 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.841 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.841 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.841 04:08:26 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.841 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.841 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.841 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.841 04:08:26 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.841 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.841 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.841 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.841 04:08:26 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.841 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.841 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.841 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.841 04:08:26 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.841 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.841 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.841 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.841 04:08:26 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.841 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.841 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.841 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.841 04:08:26 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.841 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.841 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.841 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.841 04:08:26 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.841 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.841 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.841 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.841 04:08:26 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.841 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.841 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.841 
04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.841 04:08:26 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.841 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.841 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.841 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.841 04:08:26 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.841 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.841 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.841 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.841 04:08:26 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.841 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.841 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.841 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.841 04:08:26 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.842 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.842 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.842 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.842 04:08:26 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.842 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.842 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.842 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.842 04:08:26 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.842 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.842 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.842 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.842 04:08:26 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.842 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.842 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.842 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.842 04:08:26 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.842 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.842 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.842 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.842 04:08:26 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.842 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.842 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.842 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.842 04:08:26 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.842 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.842 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.842 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.842 04:08:26 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.842 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.842 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.842 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.842 04:08:26 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.842 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.842 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.842 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.842 04:08:26 -- setup/common.sh@32 -- # [[ VmallocChunk == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.842 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.842 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.842 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.842 04:08:26 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.842 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.842 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.842 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.842 04:08:26 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.842 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.842 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.842 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.842 04:08:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.842 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.842 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.842 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.842 04:08:26 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.842 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.842 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.842 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.842 04:08:26 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.842 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.842 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.842 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.842 04:08:26 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.842 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.842 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.842 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.842 04:08:26 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.842 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.842 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.842 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.842 04:08:26 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.842 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.842 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.842 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.842 04:08:26 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.842 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.842 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.842 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.842 04:08:26 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.842 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.842 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.842 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.842 04:08:26 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.842 04:08:26 -- setup/common.sh@33 -- # echo 512 00:05:13.842 04:08:26 -- setup/common.sh@33 -- # return 0 00:05:13.842 04:08:26 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:13.842 04:08:26 -- setup/hugepages.sh@112 -- # get_nodes 00:05:13.842 04:08:26 -- setup/hugepages.sh@27 -- # local node 00:05:13.842 04:08:26 -- setup/hugepages.sh@29 -- # 
for node in /sys/devices/system/node/node+([0-9]) 00:05:13.842 04:08:26 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:13.842 04:08:26 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:13.842 04:08:26 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:13.842 04:08:26 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:13.842 04:08:26 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:13.842 04:08:26 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:13.842 04:08:26 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:13.842 04:08:26 -- setup/common.sh@18 -- # local node=0 00:05:13.842 04:08:26 -- setup/common.sh@19 -- # local var val 00:05:13.842 04:08:26 -- setup/common.sh@20 -- # local mem_f mem 00:05:13.842 04:08:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:13.842 04:08:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:13.842 04:08:26 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:13.842 04:08:26 -- setup/common.sh@28 -- # mapfile -t mem 00:05:13.842 04:08:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:13.842 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.842 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.842 04:08:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 7710060 kB' 'MemUsed: 4529048 kB' 'SwapCached: 0 kB' 'Active: 456152 kB' 'Inactive: 2661224 kB' 'Active(anon): 128292 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2661224 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 2999576 kB' 'Mapped: 50756 kB' 'AnonPages: 119444 kB' 'Shmem: 10488 kB' 'KernelStack: 6624 kB' 'PageTables: 4384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 82516 kB' 'Slab: 182212 kB' 'SReclaimable: 82516 kB' 'SUnreclaim: 99696 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:13.842 04:08:26 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.842 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.842 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.842 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.842 04:08:26 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.842 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.842 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.842 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.842 04:08:26 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.842 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.842 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.842 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.842 04:08:26 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.842 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.842 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.842 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.842 04:08:26 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.842 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.842 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.842 04:08:26 -- setup/common.sh@31 -- # 
read -r var val _ 00:05:13.842 04:08:26 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.842 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.842 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.842 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.842 04:08:26 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.842 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.842 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.842 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.842 04:08:26 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.842 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.842 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.842 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.842 04:08:26 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.842 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.842 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.842 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.842 04:08:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.842 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.842 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.842 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.842 04:08:26 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.842 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.842 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.842 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.842 04:08:26 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.842 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.842 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.842 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.842 04:08:26 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.842 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.842 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.842 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.842 04:08:26 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.842 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.842 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.842 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.842 04:08:26 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.842 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.842 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.842 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.842 04:08:26 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.842 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.842 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.842 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.842 04:08:26 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.842 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.842 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.842 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.842 04:08:26 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.842 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.842 
04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.842 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.842 04:08:26 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.842 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.842 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.842 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.842 04:08:26 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.842 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.842 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.842 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.842 04:08:26 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.842 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.842 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.842 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.842 04:08:26 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.842 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.842 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.842 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.842 04:08:26 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.842 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.842 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.842 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.842 04:08:26 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.842 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.842 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.842 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.842 04:08:26 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.842 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.842 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.842 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.842 04:08:26 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.842 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.842 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.842 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.842 04:08:26 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.842 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.842 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.842 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.842 04:08:26 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.842 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.842 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.842 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.842 04:08:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.842 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.842 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.842 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.842 04:08:26 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.843 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.843 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.843 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.843 04:08:26 -- setup/common.sh@32 -- # [[ 
ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.843 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.843 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.843 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.843 04:08:26 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.843 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.843 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.843 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.843 04:08:26 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.843 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.843 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.843 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.843 04:08:26 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.843 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.843 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.843 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.843 04:08:26 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.843 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.843 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.843 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.843 04:08:26 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.843 04:08:26 -- setup/common.sh@32 -- # continue 00:05:13.843 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:13.843 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:13.843 04:08:26 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.843 04:08:26 -- setup/common.sh@33 -- # echo 0 00:05:13.843 04:08:26 -- setup/common.sh@33 -- # return 0 00:05:13.843 04:08:26 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:13.843 04:08:26 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:13.843 04:08:26 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:13.843 04:08:26 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:13.843 node0=512 expecting 512 00:05:13.843 04:08:26 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:13.843 04:08:26 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:13.843 00:05:13.843 real 0m0.524s 00:05:13.843 user 0m0.273s 00:05:13.843 sys 0m0.281s 00:05:13.843 04:08:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:13.843 04:08:26 -- common/autotest_common.sh@10 -- # set +x 00:05:13.843 ************************************ 00:05:13.843 END TEST custom_alloc 00:05:13.843 ************************************ 00:05:13.843 04:08:26 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:05:13.843 04:08:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:13.843 04:08:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:13.843 04:08:26 -- common/autotest_common.sh@10 -- # set +x 00:05:13.843 ************************************ 00:05:13.843 START TEST no_shrink_alloc 00:05:13.843 ************************************ 00:05:13.843 04:08:26 -- common/autotest_common.sh@1114 -- # no_shrink_alloc 00:05:13.843 04:08:26 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:05:13.843 04:08:26 -- setup/hugepages.sh@49 -- # local size=2097152 00:05:13.843 04:08:26 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:13.843 04:08:26 -- 
setup/hugepages.sh@51 -- # shift 00:05:13.843 04:08:26 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:13.843 04:08:26 -- setup/hugepages.sh@52 -- # local node_ids 00:05:13.843 04:08:26 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:13.843 04:08:26 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:13.843 04:08:26 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:13.843 04:08:26 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:13.843 04:08:26 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:13.843 04:08:26 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:13.843 04:08:26 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:13.843 04:08:26 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:13.843 04:08:26 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:13.843 04:08:26 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:13.843 04:08:26 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:13.843 04:08:26 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:13.843 04:08:26 -- setup/hugepages.sh@73 -- # return 0 00:05:13.843 04:08:26 -- setup/hugepages.sh@198 -- # setup output 00:05:13.843 04:08:26 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:13.843 04:08:26 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:14.103 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:14.103 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:14.103 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:14.103 04:08:26 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:05:14.103 04:08:26 -- setup/hugepages.sh@89 -- # local node 00:05:14.103 04:08:26 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:14.103 04:08:26 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:14.103 04:08:26 -- setup/hugepages.sh@92 -- # local surp 00:05:14.103 04:08:26 -- setup/hugepages.sh@93 -- # local resv 00:05:14.103 04:08:26 -- setup/hugepages.sh@94 -- # local anon 00:05:14.103 04:08:26 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:14.103 04:08:26 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:14.103 04:08:26 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:14.103 04:08:26 -- setup/common.sh@18 -- # local node= 00:05:14.103 04:08:26 -- setup/common.sh@19 -- # local var val 00:05:14.103 04:08:26 -- setup/common.sh@20 -- # local mem_f mem 00:05:14.103 04:08:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:14.103 04:08:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:14.103 04:08:26 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:14.103 04:08:26 -- setup/common.sh@28 -- # mapfile -t mem 00:05:14.103 04:08:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:14.103 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.103 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.103 04:08:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6662532 kB' 'MemAvailable: 9455728 kB' 'Buffers: 2684 kB' 'Cached: 2996892 kB' 'SwapCached: 0 kB' 'Active: 456380 kB' 'Inactive: 2661228 kB' 'Active(anon): 128520 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2661228 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119876 kB' 
'Mapped: 50924 kB' 'Shmem: 10488 kB' 'KReclaimable: 82516 kB' 'Slab: 182136 kB' 'SReclaimable: 82516 kB' 'SUnreclaim: 99620 kB' 'KernelStack: 6580 kB' 'PageTables: 4308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 320212 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55400 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:05:14.103 04:08:26 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.103 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.103 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.103 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.103 04:08:26 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.103 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.103 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.103 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.103 04:08:26 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.103 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.103 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.103 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.103 04:08:26 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.103 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.103 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.103 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.103 04:08:26 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.103 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.103 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.103 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.103 04:08:26 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.103 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.103 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.103 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.103 04:08:26 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.103 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.103 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.103 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.103 04:08:26 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.103 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.103 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.103 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.103 04:08:26 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.103 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.103 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.103 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.103 04:08:26 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.103 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.103 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.103 04:08:26 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:14.103 04:08:26 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.103 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.103 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.103 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.103 04:08:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.103 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.103 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.103 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.103 04:08:26 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.103 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.103 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.103 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.103 04:08:26 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.103 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.103 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.103 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.103 04:08:26 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.103 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.103 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.103 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.103 04:08:26 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.103 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.103 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.103 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.103 04:08:26 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.103 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.103 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.103 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.103 04:08:26 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.103 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.103 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.103 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.103 04:08:26 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.103 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.103 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.103 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.103 04:08:26 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.103 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.103 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.103 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.103 04:08:26 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.104 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.104 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.104 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.104 04:08:26 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.104 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.104 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.104 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.104 04:08:26 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.104 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.104 04:08:26 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:14.104 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.104 04:08:26 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.104 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.104 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.104 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.104 04:08:26 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.104 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.104 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.104 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.104 04:08:26 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.104 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.104 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.104 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.104 04:08:26 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.104 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.104 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.104 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.104 04:08:26 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.104 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.104 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.104 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.104 04:08:26 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.104 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.104 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.104 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.104 04:08:26 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.104 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.104 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.104 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.104 04:08:26 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.104 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.104 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.104 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.104 04:08:26 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.104 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.104 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.104 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.104 04:08:26 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.104 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.104 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.104 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.104 04:08:26 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.104 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.104 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.104 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.104 04:08:26 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.104 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.104 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.104 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.104 04:08:26 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:05:14.104 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.104 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.104 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.104 04:08:26 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.104 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.104 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.104 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.104 04:08:26 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.104 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.104 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.104 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.104 04:08:26 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.104 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.104 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.104 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.104 04:08:26 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.104 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.104 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.104 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.104 04:08:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.104 04:08:26 -- setup/common.sh@33 -- # echo 0 00:05:14.104 04:08:26 -- setup/common.sh@33 -- # return 0 00:05:14.104 04:08:26 -- setup/hugepages.sh@97 -- # anon=0 00:05:14.104 04:08:26 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:14.104 04:08:26 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:14.104 04:08:26 -- setup/common.sh@18 -- # local node= 00:05:14.104 04:08:26 -- setup/common.sh@19 -- # local var val 00:05:14.104 04:08:26 -- setup/common.sh@20 -- # local mem_f mem 00:05:14.104 04:08:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:14.104 04:08:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:14.104 04:08:26 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:14.104 04:08:26 -- setup/common.sh@28 -- # mapfile -t mem 00:05:14.104 04:08:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:14.104 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.104 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.104 04:08:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6662532 kB' 'MemAvailable: 9455728 kB' 'Buffers: 2684 kB' 'Cached: 2996892 kB' 'SwapCached: 0 kB' 'Active: 456708 kB' 'Inactive: 2661228 kB' 'Active(anon): 128848 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2661228 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119932 kB' 'Mapped: 50756 kB' 'Shmem: 10488 kB' 'KReclaimable: 82516 kB' 'Slab: 182152 kB' 'SReclaimable: 82516 kB' 'SUnreclaim: 99636 kB' 'KernelStack: 6656 kB' 'PageTables: 4476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 320212 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55384 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 
'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:05:14.104 04:08:26 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.104 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.104 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.104 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.104 04:08:26 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.104 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.104 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.104 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.104 04:08:26 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.104 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.104 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.104 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.104 04:08:26 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.104 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.104 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.104 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.104 04:08:26 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.104 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.104 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.104 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.104 04:08:26 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.104 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.104 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.104 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.104 04:08:26 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.104 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.104 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.104 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.104 04:08:26 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.104 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.104 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.104 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.104 04:08:26 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.104 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.104 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.104 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.104 04:08:26 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.104 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.104 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.104 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.104 04:08:26 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.104 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.104 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.104 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.104 04:08:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.104 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.104 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.104 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 
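By this point the no_shrink_alloc pass has set nr_hugepages=1024: get_test_nr_hugepages was called with size=2097152, and 2097152 kB divided by the 2048 kB Hugepagesize reported in the snapshots is 1024 pages, which matches the HugePages_Total: 1024 lines above. verify_nr_hugepages is again gathering the anon, surplus, and reserved counters so it can apply the same accounting check used for custom_alloc (HugePages_Total == nr_hugepages + surp + resv). The sizing arithmetic, restated as a quick sketch with values taken from this VM's meminfo rather than hard-coded in the scripts:

    size_kb=2097152
    hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 here
    echo $(( size_kb / hugepagesize_kb ))                                # 1024 pages requested
    # verify_nr_hugepages then expects HugePages_Total to equal nr_hugepages + surp + resv,
    # as in the (( 512 == nr_hugepages + surp + resv )) check from the custom_alloc pass above.
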
00:05:14.105 04:08:26 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.105 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.105 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.105 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.105 04:08:26 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.105 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.105 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.105 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.105 04:08:26 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.105 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.105 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.105 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.105 04:08:26 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.105 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.105 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.105 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.105 04:08:26 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.105 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.105 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.105 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.105 04:08:26 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.105 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.105 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.105 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.105 04:08:26 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.105 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.105 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.105 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.105 04:08:26 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.105 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.105 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.105 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.105 04:08:26 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.105 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.105 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.105 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.105 04:08:26 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.105 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.105 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.105 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.105 04:08:26 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.105 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.105 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.105 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.105 04:08:26 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.105 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.105 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.105 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.105 04:08:26 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.105 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.105 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 
00:05:14.105 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.105 04:08:26 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.105 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.105 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.105 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.105 04:08:26 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.105 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.105 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.368 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.368 04:08:26 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.368 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.368 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.368 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.368 04:08:26 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.368 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.368 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.368 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.368 04:08:26 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.368 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.368 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.368 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.368 04:08:26 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.368 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.368 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.368 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.368 04:08:26 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.368 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.368 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.368 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.368 04:08:26 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.368 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.368 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.368 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.368 04:08:26 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.368 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.368 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.368 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.368 04:08:26 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.368 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.368 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.368 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.368 04:08:26 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.368 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.368 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.368 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.368 04:08:26 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.368 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.368 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.368 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.368 04:08:26 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:05:14.368 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.368 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.368 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.368 04:08:26 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.368 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.368 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.368 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.368 04:08:26 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.368 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.368 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.368 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.368 04:08:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.368 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.368 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.369 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.369 04:08:26 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.369 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.369 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.369 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.369 04:08:26 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.369 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.369 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.369 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.369 04:08:26 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.369 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.369 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.369 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.369 04:08:26 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.369 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.369 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.369 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.369 04:08:26 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.369 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.369 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.369 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.369 04:08:26 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.369 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.369 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.369 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.369 04:08:26 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.369 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.369 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.369 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.369 04:08:26 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.369 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.369 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.369 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.369 04:08:26 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.369 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.369 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.369 04:08:26 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:14.369 04:08:26 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.369 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.369 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.369 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.369 04:08:26 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.369 04:08:26 -- setup/common.sh@33 -- # echo 0 00:05:14.369 04:08:26 -- setup/common.sh@33 -- # return 0 00:05:14.369 04:08:26 -- setup/hugepages.sh@99 -- # surp=0 00:05:14.369 04:08:26 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:14.369 04:08:26 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:14.369 04:08:26 -- setup/common.sh@18 -- # local node= 00:05:14.369 04:08:26 -- setup/common.sh@19 -- # local var val 00:05:14.369 04:08:26 -- setup/common.sh@20 -- # local mem_f mem 00:05:14.369 04:08:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:14.369 04:08:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:14.369 04:08:26 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:14.369 04:08:26 -- setup/common.sh@28 -- # mapfile -t mem 00:05:14.369 04:08:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:14.369 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.369 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.369 04:08:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6662532 kB' 'MemAvailable: 9455728 kB' 'Buffers: 2684 kB' 'Cached: 2996892 kB' 'SwapCached: 0 kB' 'Active: 456196 kB' 'Inactive: 2661228 kB' 'Active(anon): 128336 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2661228 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119696 kB' 'Mapped: 50756 kB' 'Shmem: 10488 kB' 'KReclaimable: 82516 kB' 'Slab: 182148 kB' 'SReclaimable: 82516 kB' 'SUnreclaim: 99632 kB' 'KernelStack: 6640 kB' 'PageTables: 4432 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 320212 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55368 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:05:14.369 04:08:26 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.369 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.369 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.369 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.369 04:08:26 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.369 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.369 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.369 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.369 04:08:26 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.369 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.369 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.369 04:08:26 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:14.369 04:08:26 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.369 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.369 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.369 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.369 04:08:26 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.369 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.369 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.369 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.369 04:08:26 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.369 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.369 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.369 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.369 04:08:26 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.369 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.369 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.369 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.369 04:08:26 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.369 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.369 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.369 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.369 04:08:26 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.369 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.369 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.369 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.369 04:08:26 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.369 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.369 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.369 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.369 04:08:26 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.369 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.369 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.369 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.369 04:08:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.369 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.369 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.369 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.369 04:08:26 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.369 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.369 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.369 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.369 04:08:26 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.369 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.369 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.369 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.369 04:08:26 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.369 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.369 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.369 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.369 04:08:26 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.369 04:08:26 -- setup/common.sh@32 -- # 
continue 00:05:14.369 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.369 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.369 04:08:26 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.369 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.369 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.369 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.369 04:08:26 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.369 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.369 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.369 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.369 04:08:26 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.369 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.369 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.369 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.369 04:08:26 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.369 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.369 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.369 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.369 04:08:26 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.369 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.369 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.369 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.369 04:08:26 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.369 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.370 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.370 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.370 04:08:26 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.370 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.370 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.370 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.370 04:08:26 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.370 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.370 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.370 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.370 04:08:26 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.370 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.370 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.370 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.370 04:08:26 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.370 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.370 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.370 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.370 04:08:26 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.370 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.370 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.370 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.370 04:08:26 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.370 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.370 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.370 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.370 04:08:26 -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.370 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.370 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.370 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.370 04:08:26 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.370 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.370 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.370 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.370 04:08:26 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.370 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.370 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.370 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.370 04:08:26 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.370 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.370 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.370 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.370 04:08:26 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.370 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.370 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.370 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.370 04:08:26 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.370 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.370 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.370 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.370 04:08:26 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.370 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.370 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.370 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.370 04:08:26 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.370 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.370 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.370 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.370 04:08:26 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.370 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.370 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.370 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.370 04:08:26 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.370 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.370 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.370 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.370 04:08:26 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.370 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.370 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.370 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.370 04:08:26 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.370 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.370 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.370 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.370 04:08:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.370 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.370 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.370 
04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.370 04:08:26 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.370 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.370 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.370 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.370 04:08:26 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.370 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.370 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.370 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.370 04:08:26 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.370 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.370 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.370 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.370 04:08:26 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.370 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.370 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.370 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.370 04:08:26 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.370 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.370 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.370 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.370 04:08:26 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.370 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.370 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.370 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.370 04:08:26 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.370 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.370 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.370 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.370 04:08:26 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.370 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.370 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.370 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.370 04:08:26 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.370 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.370 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.370 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.370 04:08:26 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.370 04:08:26 -- setup/common.sh@33 -- # echo 0 00:05:14.370 04:08:26 -- setup/common.sh@33 -- # return 0 00:05:14.370 04:08:26 -- setup/hugepages.sh@100 -- # resv=0 00:05:14.370 nr_hugepages=1024 00:05:14.370 04:08:26 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:14.370 resv_hugepages=0 00:05:14.370 04:08:26 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:14.370 surplus_hugepages=0 00:05:14.370 04:08:26 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:14.370 anon_hugepages=0 00:05:14.370 04:08:26 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:14.370 04:08:26 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:14.370 04:08:26 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:14.370 04:08:26 -- setup/hugepages.sh@110 -- # get_meminfo 
HugePages_Total 00:05:14.370 04:08:26 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:14.370 04:08:26 -- setup/common.sh@18 -- # local node= 00:05:14.370 04:08:26 -- setup/common.sh@19 -- # local var val 00:05:14.370 04:08:26 -- setup/common.sh@20 -- # local mem_f mem 00:05:14.370 04:08:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:14.370 04:08:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:14.370 04:08:26 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:14.370 04:08:26 -- setup/common.sh@28 -- # mapfile -t mem 00:05:14.370 04:08:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:14.370 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.370 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.370 04:08:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6662532 kB' 'MemAvailable: 9455728 kB' 'Buffers: 2684 kB' 'Cached: 2996892 kB' 'SwapCached: 0 kB' 'Active: 456436 kB' 'Inactive: 2661228 kB' 'Active(anon): 128576 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2661228 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119696 kB' 'Mapped: 50756 kB' 'Shmem: 10488 kB' 'KReclaimable: 82516 kB' 'Slab: 182144 kB' 'SReclaimable: 82516 kB' 'SUnreclaim: 99628 kB' 'KernelStack: 6640 kB' 'PageTables: 4432 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 320212 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55368 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:05:14.370 04:08:26 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.370 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.370 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.370 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.370 04:08:26 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.371 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.371 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.371 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.371 04:08:26 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.371 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.371 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.371 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.371 04:08:26 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.371 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.371 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.371 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.371 04:08:26 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.371 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.371 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.371 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.371 04:08:26 -- setup/common.sh@32 -- # [[ 
SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.371 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.371 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.371 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.371 04:08:26 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.371 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.371 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.371 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.371 04:08:26 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.371 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.371 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.371 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.371 04:08:26 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.371 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.371 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.371 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.371 04:08:26 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.371 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.371 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.371 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.371 04:08:26 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.371 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.371 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.371 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.371 04:08:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.371 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.371 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.371 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.371 04:08:26 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.371 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.371 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.371 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.371 04:08:26 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.371 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.371 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.371 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.371 04:08:26 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.371 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.371 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.371 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.371 04:08:26 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.371 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.371 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.371 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.371 04:08:26 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.371 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.371 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.371 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.371 04:08:26 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.371 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.371 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 
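Before this HugePages_Total scan, the trace set surp=0 (hugepages.sh@99), looked up HugePages_Rsvd and set resv=0, echoed nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0, and then asserted (( 1024 == nr_hugepages + surp + resv )). Read that way, the arithmetic being verified is simply that every allocated hugepage is accounted for: total == requested + surplus + reserved. A hedged sketch of that consistency check, reusing the illustrative helper from the earlier sketch (names are not the script's own):

  nr_hugepages=1024                            # what the test configured
  surp=$(get_meminfo_value HugePages_Surp)     # 0 in the trace above
  resv=$(get_meminfo_value HugePages_Rsvd)     # 0 in the trace above
  total=$(get_meminfo_value HugePages_Total)   # 1024 in the trace above

  # every allocated hugepage must be accounted for
  if (( total == nr_hugepages + surp + resv )); then
      echo "hugepage accounting is consistent: $total pages"
  else
      echo "mismatch: total=$total requested=$nr_hugepages surp=$surp resv=$resv" >&2
  fi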
00:05:14.371 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.371 04:08:26 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.371 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.371 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.371 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.371 04:08:26 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.371 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.371 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.371 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.371 04:08:26 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.371 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.371 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.371 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.371 04:08:26 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.371 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.371 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.371 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.371 04:08:26 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.371 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.371 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.371 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.371 04:08:26 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.371 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.371 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.371 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.371 04:08:26 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.371 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.371 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.371 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.371 04:08:26 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.371 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.371 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.371 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.371 04:08:26 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.371 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.371 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.371 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.371 04:08:26 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.371 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.371 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.371 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.371 04:08:26 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.371 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.371 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.371 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.371 04:08:26 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.371 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.371 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.371 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.371 04:08:26 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:05:14.371 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.371 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.371 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.371 04:08:26 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.371 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.371 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.371 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.371 04:08:26 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.371 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.371 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.371 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.371 04:08:26 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.371 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.371 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.371 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.371 04:08:26 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.371 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.371 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.371 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.371 04:08:26 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.371 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.371 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.371 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.371 04:08:26 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.371 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.371 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.371 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.371 04:08:26 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.371 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.371 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.371 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.371 04:08:26 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.371 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.371 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.371 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.371 04:08:26 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.371 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.371 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.371 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.371 04:08:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.371 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.371 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.371 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.371 04:08:26 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.371 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.371 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.372 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.372 04:08:26 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.372 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.372 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.372 04:08:26 
-- setup/common.sh@31 -- # read -r var val _ 00:05:14.372 04:08:26 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.372 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.372 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.372 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.372 04:08:26 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.372 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.372 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.372 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.372 04:08:26 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.372 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.372 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.372 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.372 04:08:26 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.372 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.372 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.372 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.372 04:08:26 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.372 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.372 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.372 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.372 04:08:26 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.372 04:08:26 -- setup/common.sh@33 -- # echo 1024 00:05:14.372 04:08:26 -- setup/common.sh@33 -- # return 0 00:05:14.372 04:08:26 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:14.372 04:08:26 -- setup/hugepages.sh@112 -- # get_nodes 00:05:14.372 04:08:26 -- setup/hugepages.sh@27 -- # local node 00:05:14.372 04:08:26 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:14.372 04:08:26 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:14.372 04:08:26 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:14.372 04:08:26 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:14.372 04:08:26 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:14.372 04:08:26 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:14.372 04:08:26 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:14.372 04:08:26 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:14.372 04:08:26 -- setup/common.sh@18 -- # local node=0 00:05:14.372 04:08:26 -- setup/common.sh@19 -- # local var val 00:05:14.372 04:08:26 -- setup/common.sh@20 -- # local mem_f mem 00:05:14.372 04:08:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:14.372 04:08:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:14.372 04:08:26 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:14.372 04:08:26 -- setup/common.sh@28 -- # mapfile -t mem 00:05:14.372 04:08:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:14.372 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.372 04:08:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6662532 kB' 'MemUsed: 5576576 kB' 'SwapCached: 0 kB' 'Active: 456428 kB' 'Inactive: 2661228 kB' 'Active(anon): 128568 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2661228 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 
'Writeback: 0 kB' 'FilePages: 2999576 kB' 'Mapped: 50756 kB' 'AnonPages: 119660 kB' 'Shmem: 10488 kB' 'KernelStack: 6624 kB' 'PageTables: 4388 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 82516 kB' 'Slab: 182144 kB' 'SReclaimable: 82516 kB' 'SUnreclaim: 99628 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:14.372 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.372 04:08:26 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.372 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.372 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.372 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.372 04:08:26 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.372 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.372 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.372 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.372 04:08:26 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.372 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.372 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.372 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.372 04:08:26 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.372 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.372 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.372 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.372 04:08:26 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.372 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.372 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.372 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.372 04:08:26 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.372 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.372 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.372 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.372 04:08:26 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.372 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.372 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.372 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.372 04:08:26 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.372 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.372 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.372 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.372 04:08:26 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.372 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.372 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.372 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.372 04:08:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.372 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.372 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.372 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.372 04:08:26 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.372 04:08:26 -- setup/common.sh@32 -- # continue 
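At setup/common.sh@23-24 above, the trace switches mem_f from /proc/meminfo to /sys/devices/system/node/node0/meminfo as soon as a node argument (node=0) is supplied, so the same key lookup can be answered either system-wide or per NUMA node. A small sketch of that source selection, with illustrative names rather than the script's actual code:

  meminfo_source() {
      # $1 is an optional NUMA node number; empty means system-wide
      local node=$1 mem_f=/proc/meminfo
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      echo "$mem_f"
  }

Note that the per-node file prefixes each line with "Node <n> ", which the trace strips at setup/common.sh@29 with mem=("${mem[@]#Node +([0-9]) }") before the same key/value scan is applied.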
00:05:14.372 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.372 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.372 04:08:26 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.372 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.372 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.372 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.372 04:08:26 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.372 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.372 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.372 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.372 04:08:26 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.372 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.372 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.372 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.372 04:08:26 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.372 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.372 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.372 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.372 04:08:26 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.372 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.372 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.372 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.372 04:08:26 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.372 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.372 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.372 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.372 04:08:26 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.372 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.372 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.372 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.372 04:08:26 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.372 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.372 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.372 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.372 04:08:26 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.372 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.372 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.372 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.372 04:08:26 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.372 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.372 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.372 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.372 04:08:26 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.372 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.372 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.372 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.372 04:08:26 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.372 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.372 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.372 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.372 04:08:26 -- setup/common.sh@32 -- # [[ WritebackTmp == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.372 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.372 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.372 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.372 04:08:26 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.372 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.372 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.372 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.372 04:08:26 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.373 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.373 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.373 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.373 04:08:26 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.373 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.373 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.373 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.373 04:08:26 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.373 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.373 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.373 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.373 04:08:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.373 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.373 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.373 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.373 04:08:26 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.373 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.373 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.373 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.373 04:08:26 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.373 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.373 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.373 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.373 04:08:26 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.373 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.373 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.373 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.373 04:08:26 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.373 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.373 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.373 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.373 04:08:26 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.373 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.373 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.373 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.373 04:08:26 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.373 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.373 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.373 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.373 04:08:26 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.373 04:08:26 -- setup/common.sh@32 -- # continue 00:05:14.373 04:08:26 -- setup/common.sh@31 -- # IFS=': ' 
00:05:14.373 04:08:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.373 04:08:26 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.373 04:08:26 -- setup/common.sh@33 -- # echo 0 00:05:14.373 04:08:26 -- setup/common.sh@33 -- # return 0 00:05:14.373 04:08:26 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:14.373 04:08:26 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:14.373 04:08:26 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:14.373 04:08:26 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:14.373 node0=1024 expecting 1024 00:05:14.373 04:08:26 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:14.373 04:08:26 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:14.373 04:08:26 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:05:14.373 04:08:26 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:05:14.373 04:08:26 -- setup/hugepages.sh@202 -- # setup output 00:05:14.373 04:08:26 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:14.373 04:08:26 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:14.633 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:14.633 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:14.633 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:14.633 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:05:14.633 04:08:27 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:05:14.633 04:08:27 -- setup/hugepages.sh@89 -- # local node 00:05:14.633 04:08:27 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:14.633 04:08:27 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:14.633 04:08:27 -- setup/hugepages.sh@92 -- # local surp 00:05:14.633 04:08:27 -- setup/hugepages.sh@93 -- # local resv 00:05:14.633 04:08:27 -- setup/hugepages.sh@94 -- # local anon 00:05:14.633 04:08:27 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:14.633 04:08:27 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:14.633 04:08:27 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:14.633 04:08:27 -- setup/common.sh@18 -- # local node= 00:05:14.633 04:08:27 -- setup/common.sh@19 -- # local var val 00:05:14.633 04:08:27 -- setup/common.sh@20 -- # local mem_f mem 00:05:14.633 04:08:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:14.633 04:08:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:14.633 04:08:27 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:14.633 04:08:27 -- setup/common.sh@28 -- # mapfile -t mem 00:05:14.633 04:08:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:14.633 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.633 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.633 04:08:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6663476 kB' 'MemAvailable: 9456672 kB' 'Buffers: 2684 kB' 'Cached: 2996892 kB' 'SwapCached: 0 kB' 'Active: 456568 kB' 'Inactive: 2661228 kB' 'Active(anon): 128708 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2661228 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119864 kB' 'Mapped: 50940 kB' 'Shmem: 10488 kB' 'KReclaimable: 82516 kB' 'Slab: 182168 kB' 'SReclaimable: 82516 
kB' 'SUnreclaim: 99652 kB' 'KernelStack: 6632 kB' 'PageTables: 4292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 320344 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55368 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:05:14.633 04:08:27 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.633 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.633 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.633 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.633 04:08:27 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.633 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.633 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.633 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.633 04:08:27 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.633 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.633 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.633 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.633 04:08:27 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.633 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.633 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.633 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.633 04:08:27 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.633 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.633 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.633 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.633 04:08:27 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.633 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.633 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.633 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.633 04:08:27 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.633 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.633 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.633 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.633 04:08:27 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.633 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.633 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.633 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.633 04:08:27 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.633 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.633 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.633 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.633 04:08:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.634 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.634 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.634 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.634 04:08:27 -- setup/common.sh@32 -- # [[ Active(file) == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.634 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.634 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.634 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.634 04:08:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.634 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.634 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.634 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.634 04:08:27 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.634 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.634 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.634 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.634 04:08:27 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.634 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.634 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.634 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.634 04:08:27 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.634 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.634 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.634 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.634 04:08:27 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.634 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.634 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.634 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.634 04:08:27 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.634 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.634 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.634 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.634 04:08:27 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.634 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.634 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.634 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.634 04:08:27 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.634 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.634 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.634 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.634 04:08:27 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.634 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.634 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.634 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.634 04:08:27 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.634 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.634 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.634 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.634 04:08:27 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.634 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.634 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.634 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.634 04:08:27 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.634 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.634 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.634 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.634 
04:08:27 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.634 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.634 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.634 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.634 04:08:27 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.634 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.634 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.634 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.634 04:08:27 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.634 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.634 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.634 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.634 04:08:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.634 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.634 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.634 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.634 04:08:27 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.634 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.634 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.634 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.634 04:08:27 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.634 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.634 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.634 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.634 04:08:27 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.634 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.634 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.634 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.634 04:08:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.634 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.634 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.634 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.634 04:08:27 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.634 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.634 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.634 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.634 04:08:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.634 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.634 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.634 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.634 04:08:27 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.634 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.634 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.634 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.634 04:08:27 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.634 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.634 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.634 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.634 04:08:27 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.634 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.634 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 
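The scan in progress here is verify_nr_hugepages' first pass (AnonHugePages, hugepages.sh@97), re-checking the pool right after scripts/setup.sh was re-run with NRHUGE=512 and CLEAR_HUGE=no; per the INFO line above, the existing 1024-page pool is kept rather than shrunk to 512. Reproducing that step outside the harness looks roughly like this, using the repo path from this job and assuming root privileges:

    # Keep whatever hugepage pool already exists (CLEAR_HUGE=no) and request 512
    # pages; with 1024 already allocated, setup.sh reports that and leaves them.
    sudo CLEAR_HUGE=no NRHUGE=512 /home/vagrant/spdk_repo/spdk/scripts/setup.sh
    # Confirm the pool afterwards, as verify_nr_hugepages is doing here:
    grep -E 'HugePages_(Total|Free|Rsvd|Surp)' /proc/meminfo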
00:05:14.634 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.634 04:08:27 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.634 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.634 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.634 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.634 04:08:27 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.634 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.634 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.634 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.634 04:08:27 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.634 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.634 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.634 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.634 04:08:27 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.634 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.634 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.634 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.634 04:08:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.634 04:08:27 -- setup/common.sh@33 -- # echo 0 00:05:14.634 04:08:27 -- setup/common.sh@33 -- # return 0 00:05:14.634 04:08:27 -- setup/hugepages.sh@97 -- # anon=0 00:05:14.634 04:08:27 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:14.634 04:08:27 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:14.634 04:08:27 -- setup/common.sh@18 -- # local node= 00:05:14.634 04:08:27 -- setup/common.sh@19 -- # local var val 00:05:14.634 04:08:27 -- setup/common.sh@20 -- # local mem_f mem 00:05:14.634 04:08:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:14.634 04:08:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:14.634 04:08:27 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:14.634 04:08:27 -- setup/common.sh@28 -- # mapfile -t mem 00:05:14.634 04:08:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:14.634 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.634 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.634 04:08:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6663476 kB' 'MemAvailable: 9456672 kB' 'Buffers: 2684 kB' 'Cached: 2996892 kB' 'SwapCached: 0 kB' 'Active: 456156 kB' 'Inactive: 2661228 kB' 'Active(anon): 128296 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2661228 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119716 kB' 'Mapped: 50788 kB' 'Shmem: 10488 kB' 'KReclaimable: 82516 kB' 'Slab: 182200 kB' 'SReclaimable: 82516 kB' 'SUnreclaim: 99684 kB' 'KernelStack: 6640 kB' 'PageTables: 4420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 320344 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55336 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 
173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:05:14.896 04:08:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.896 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.896 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.896 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.896 04:08:27 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.896 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.896 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.896 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.896 04:08:27 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.896 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.896 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.896 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.896 04:08:27 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.896 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.896 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.897 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.897 04:08:27 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.897 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.897 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.897 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.897 04:08:27 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.897 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.897 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.897 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.897 04:08:27 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.897 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.897 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.897 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.897 04:08:27 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.897 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.897 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.897 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.897 04:08:27 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.897 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.897 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.897 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.897 04:08:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.897 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.897 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.897 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.897 04:08:27 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.897 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.897 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.897 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.897 04:08:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.897 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.897 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.897 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.897 04:08:27 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.897 04:08:27 -- 
setup/common.sh@32 -- # continue 00:05:14.897 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.897 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.897 04:08:27 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.897 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.897 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.897 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.897 04:08:27 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.897 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.897 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.897 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.897 04:08:27 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.897 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.897 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.897 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.897 04:08:27 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.897 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.897 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.897 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.897 04:08:27 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.897 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.897 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.897 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.897 04:08:27 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.897 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.897 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.897 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.897 04:08:27 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.897 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.897 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.897 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.897 04:08:27 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.897 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.897 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.897 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.897 04:08:27 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.897 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.897 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.897 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.897 04:08:27 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.897 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.897 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.897 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.897 04:08:27 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.897 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.897 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.897 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.897 04:08:27 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.897 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.897 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.897 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.897 04:08:27 -- setup/common.sh@32 -- # [[ 
SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.897 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.897 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.897 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.897 04:08:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.897 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.897 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.897 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.897 04:08:27 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.897 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.897 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.897 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.897 04:08:27 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.897 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.897 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.897 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.897 04:08:27 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.897 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.897 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.897 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.897 04:08:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.897 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.897 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.897 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.897 04:08:27 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.897 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.897 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.897 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.897 04:08:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.897 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.897 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.897 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.898 04:08:27 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.898 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.898 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.898 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.898 04:08:27 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.898 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.898 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.898 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.898 04:08:27 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.898 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.898 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.898 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.898 04:08:27 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.898 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.898 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.898 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.898 04:08:27 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.898 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.898 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 
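The HugePages_Surp value being hunted in this pass (and HugePages_Rsvd in the next one) is also exposed per page size under sysfs; a quick way to read the same counters without parsing /proc/meminfo, assuming the 2048 kB page size reported in the snapshots above:

    # Same counters as HugePages_Total/Free/Rsvd/Surp in /proc/meminfo,
    # broken out per supported hugepage size.
    for f in nr_hugepages free_hugepages resv_hugepages surplus_hugepages; do
        printf '%-20s %s\n' "$f" "$(cat /sys/kernel/mm/hugepages/hugepages-2048kB/$f)"
    done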
00:05:14.898 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.898 04:08:27 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.898 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.898 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.898 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.898 04:08:27 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.898 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.898 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.898 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.898 04:08:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.898 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.898 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.898 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.898 04:08:27 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.898 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.898 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.898 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.898 04:08:27 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.898 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.898 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.898 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.898 04:08:27 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.898 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.898 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.898 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.898 04:08:27 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.898 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.898 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.898 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.898 04:08:27 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.898 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.898 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.898 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.898 04:08:27 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.898 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.898 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.898 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.898 04:08:27 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.898 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.898 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.898 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.898 04:08:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.898 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.898 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.898 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.898 04:08:27 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.898 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.898 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.898 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.898 04:08:27 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.898 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.898 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.898 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.898 04:08:27 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.898 04:08:27 -- setup/common.sh@33 -- # echo 0 00:05:14.898 04:08:27 -- setup/common.sh@33 -- # return 0 00:05:14.898 04:08:27 -- setup/hugepages.sh@99 -- # surp=0 00:05:14.898 04:08:27 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:14.898 04:08:27 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:14.898 04:08:27 -- setup/common.sh@18 -- # local node= 00:05:14.898 04:08:27 -- setup/common.sh@19 -- # local var val 00:05:14.898 04:08:27 -- setup/common.sh@20 -- # local mem_f mem 00:05:14.898 04:08:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:14.898 04:08:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:14.898 04:08:27 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:14.898 04:08:27 -- setup/common.sh@28 -- # mapfile -t mem 00:05:14.898 04:08:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:14.898 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.898 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.898 04:08:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6663476 kB' 'MemAvailable: 9456672 kB' 'Buffers: 2684 kB' 'Cached: 2996892 kB' 'SwapCached: 0 kB' 'Active: 456072 kB' 'Inactive: 2661228 kB' 'Active(anon): 128212 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2661228 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119656 kB' 'Mapped: 50756 kB' 'Shmem: 10488 kB' 'KReclaimable: 82516 kB' 'Slab: 182200 kB' 'SReclaimable: 82516 kB' 'SUnreclaim: 99684 kB' 'KernelStack: 6624 kB' 'PageTables: 4380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 320344 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55336 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:05:14.898 04:08:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.898 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.898 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.898 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.898 04:08:27 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.898 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.898 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.898 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.898 04:08:27 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.898 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.898 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.899 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.899 04:08:27 -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.899 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.899 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.899 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.899 04:08:27 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.899 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.899 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.899 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.899 04:08:27 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.899 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.899 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.899 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.899 04:08:27 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.899 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.899 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.899 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.899 04:08:27 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.899 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.899 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.899 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.899 04:08:27 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.899 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.899 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.899 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.899 04:08:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.899 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.899 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.899 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.899 04:08:27 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.899 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.899 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.899 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.899 04:08:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.899 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.899 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.899 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.899 04:08:27 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.899 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.899 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.899 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.899 04:08:27 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.899 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.899 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.899 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.899 04:08:27 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.899 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.899 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.899 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.899 04:08:27 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.899 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.899 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.899 04:08:27 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:14.899 04:08:27 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.899 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.899 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.899 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.899 04:08:27 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.899 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.899 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.899 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.899 04:08:27 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.899 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.899 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.899 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.899 04:08:27 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.899 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.899 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.899 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.899 04:08:27 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.899 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.899 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.899 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.899 04:08:27 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.899 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.899 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.899 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.899 04:08:27 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.899 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.899 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.899 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.899 04:08:27 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.899 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.899 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.899 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.899 04:08:27 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.899 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.899 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.899 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.899 04:08:27 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.899 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.899 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.899 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.899 04:08:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.899 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.899 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.899 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.899 04:08:27 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.899 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.899 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.899 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.899 04:08:27 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.899 04:08:27 -- setup/common.sh@32 -- # continue 
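Once this HugePages_Rsvd pass returns, verify_nr_hugepages has total, surplus and reserved counts and checks that they add up to the expected pool size (the nr_hugepages=1024 / resv_hugepages=0 / surplus_hugepages=0 lines that follow). A standalone approximation of that bookkeeping, with the expected value of 1024 taken from this run:

    expected=1024   # pool size this test run expects
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
    resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
    echo "nr_hugepages=$total resv_hugepages=$resv surplus_hugepages=$surp"
    (( total == expected + surp + resv )) || echo "hugepage pool mismatch" >&2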
00:05:14.899 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.899 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.899 04:08:27 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.899 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.899 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.899 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.899 04:08:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.899 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.899 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.899 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.899 04:08:27 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.899 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.899 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.899 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.899 04:08:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.899 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.899 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.899 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.899 04:08:27 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.900 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.900 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.900 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.900 04:08:27 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.900 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.900 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.900 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.900 04:08:27 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.900 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.900 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.900 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.900 04:08:27 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.900 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.900 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.900 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.900 04:08:27 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.900 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.900 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.900 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.900 04:08:27 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.900 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.900 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.900 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.900 04:08:27 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.900 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.900 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.900 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.900 04:08:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.900 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.900 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.900 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.900 04:08:27 -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.900 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.900 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.900 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.900 04:08:27 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.900 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.900 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.900 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.900 04:08:27 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.900 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.900 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.900 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.900 04:08:27 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.900 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.900 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.900 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.900 04:08:27 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.900 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.900 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.900 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.900 04:08:27 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.900 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.900 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.900 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.900 04:08:27 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.900 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.900 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.900 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.900 04:08:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.900 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.900 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.900 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.900 04:08:27 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.900 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.900 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.900 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.900 04:08:27 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.900 04:08:27 -- setup/common.sh@33 -- # echo 0 00:05:14.900 04:08:27 -- setup/common.sh@33 -- # return 0 00:05:14.900 04:08:27 -- setup/hugepages.sh@100 -- # resv=0 00:05:14.900 04:08:27 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:14.900 nr_hugepages=1024 00:05:14.900 resv_hugepages=0 00:05:14.900 04:08:27 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:14.900 surplus_hugepages=0 00:05:14.900 04:08:27 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:14.900 anon_hugepages=0 00:05:14.900 04:08:27 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:14.900 04:08:27 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:14.900 04:08:27 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:14.900 04:08:27 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:14.900 04:08:27 -- setup/common.sh@17 -- # local 
get=HugePages_Total 00:05:14.900 04:08:27 -- setup/common.sh@18 -- # local node= 00:05:14.900 04:08:27 -- setup/common.sh@19 -- # local var val 00:05:14.900 04:08:27 -- setup/common.sh@20 -- # local mem_f mem 00:05:14.900 04:08:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:14.900 04:08:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:14.900 04:08:27 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:14.900 04:08:27 -- setup/common.sh@28 -- # mapfile -t mem 00:05:14.900 04:08:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:14.900 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.900 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.900 04:08:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6663812 kB' 'MemAvailable: 9457008 kB' 'Buffers: 2684 kB' 'Cached: 2996892 kB' 'SwapCached: 0 kB' 'Active: 456068 kB' 'Inactive: 2661228 kB' 'Active(anon): 128208 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2661228 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119668 kB' 'Mapped: 50756 kB' 'Shmem: 10488 kB' 'KReclaimable: 82516 kB' 'Slab: 182200 kB' 'SReclaimable: 82516 kB' 'SUnreclaim: 99684 kB' 'KernelStack: 6624 kB' 'PageTables: 4380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 320344 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55336 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:05:14.900 04:08:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.900 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.900 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.900 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.900 04:08:27 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.900 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.900 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.900 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.900 04:08:27 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.900 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.900 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.900 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.900 04:08:27 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.901 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.901 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.901 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.901 04:08:27 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.901 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.901 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.901 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.901 04:08:27 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.901 04:08:27 -- 
setup/common.sh@32 -- # continue 00:05:14.901 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.901 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.901 04:08:27 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.901 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.901 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.901 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.901 04:08:27 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.901 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.901 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.901 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.901 04:08:27 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.901 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.901 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.901 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.901 04:08:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.901 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.901 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.901 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.901 04:08:27 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.901 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.901 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.901 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.901 04:08:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.901 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.901 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.901 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.901 04:08:27 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.901 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.901 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.901 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.901 04:08:27 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.901 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.901 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.901 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.901 04:08:27 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.901 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.901 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.901 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.901 04:08:27 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.901 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.901 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.901 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.901 04:08:27 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.901 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.901 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.901 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.901 04:08:27 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.901 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.901 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.901 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 
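This pass is after the system-wide HugePages_Total, which comes back as 1024 just below; get_nodes (hugepages.sh@112) then spreads the expectation across NUMA nodes (only node0 on this single-node VM) and re-reads node0's own meminfo snapshot. The per-node view is also available directly, for example:

    # Pool size per NUMA node (2 MB pages assumed, as in the snapshots above)
    for n in /sys/devices/system/node/node*/hugepages/hugepages-2048kB; do
        echo "${n%/hugepages/*}: $(cat "$n/nr_hugepages") pages"
    done
    # Or the HugePages_* lines from a node's meminfo, which is what the
    # harness parses next:
    grep -E 'HugePages_(Total|Free|Surp)' /sys/devices/system/node/node0/meminfo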
00:05:14.901 04:08:27 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.901 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.901 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.901 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.901 04:08:27 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.901 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.901 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.901 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.901 04:08:27 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.901 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.901 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.901 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.901 04:08:27 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.901 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.901 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.901 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.901 04:08:27 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.901 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.901 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.901 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.901 04:08:27 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.901 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.901 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.901 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.901 04:08:27 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.901 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.901 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.901 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.901 04:08:27 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.901 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.901 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.901 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.901 04:08:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.901 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.901 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.901 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.901 04:08:27 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.901 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.901 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.901 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.901 04:08:27 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.901 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.901 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.901 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.901 04:08:27 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.901 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.901 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.901 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.901 04:08:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.901 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.901 
04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.901 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.901 04:08:27 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.901 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.901 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.901 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.901 04:08:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.901 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.901 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.901 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.902 04:08:27 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.902 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.902 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.902 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.902 04:08:27 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.902 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.902 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.902 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.902 04:08:27 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.902 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.902 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.902 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.902 04:08:27 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.902 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.902 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.902 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.902 04:08:27 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.902 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.902 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.902 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.902 04:08:27 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.902 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.902 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.902 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.902 04:08:27 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.902 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.902 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.902 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.902 04:08:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.902 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.902 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.902 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.902 04:08:27 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.902 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.902 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.902 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.902 04:08:27 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.902 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.902 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.902 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.902 04:08:27 -- 
setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.902 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.902 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.902 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.902 04:08:27 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.902 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.902 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.902 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.902 04:08:27 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.902 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.902 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.902 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.902 04:08:27 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.902 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.902 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.902 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.902 04:08:27 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.902 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.902 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.902 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.902 04:08:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.902 04:08:27 -- setup/common.sh@33 -- # echo 1024 00:05:14.902 04:08:27 -- setup/common.sh@33 -- # return 0 00:05:14.902 04:08:27 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:14.902 04:08:27 -- setup/hugepages.sh@112 -- # get_nodes 00:05:14.902 04:08:27 -- setup/hugepages.sh@27 -- # local node 00:05:14.902 04:08:27 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:14.902 04:08:27 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:14.902 04:08:27 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:14.902 04:08:27 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:14.902 04:08:27 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:14.902 04:08:27 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:14.902 04:08:27 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:14.902 04:08:27 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:14.902 04:08:27 -- setup/common.sh@18 -- # local node=0 00:05:14.902 04:08:27 -- setup/common.sh@19 -- # local var val 00:05:14.902 04:08:27 -- setup/common.sh@20 -- # local mem_f mem 00:05:14.902 04:08:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:14.902 04:08:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:14.902 04:08:27 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:14.902 04:08:27 -- setup/common.sh@28 -- # mapfile -t mem 00:05:14.902 04:08:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:14.902 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.902 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.902 04:08:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239108 kB' 'MemFree: 6664488 kB' 'MemUsed: 5574620 kB' 'SwapCached: 0 kB' 'Active: 453740 kB' 'Inactive: 2661228 kB' 'Active(anon): 125880 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2661228 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 
'Writeback: 0 kB' 'FilePages: 2999576 kB' 'Mapped: 49908 kB' 'AnonPages: 117320 kB' 'Shmem: 10488 kB' 'KernelStack: 6544 kB' 'PageTables: 4032 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 82516 kB' 'Slab: 182116 kB' 'SReclaimable: 82516 kB' 'SUnreclaim: 99600 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:14.902 04:08:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.902 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.902 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.902 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.902 04:08:27 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.902 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.902 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.902 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.902 04:08:27 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.902 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.902 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.902 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.902 04:08:27 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.902 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.902 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.902 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.902 04:08:27 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.902 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.902 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.902 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.902 04:08:27 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.902 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.902 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.902 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.902 04:08:27 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.902 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.902 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.902 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.903 04:08:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.903 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.903 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.903 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.903 04:08:27 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.903 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.903 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.903 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.903 04:08:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.903 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.903 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.903 04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.903 04:08:27 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.903 04:08:27 -- setup/common.sh@32 -- # continue 00:05:14.903 04:08:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.903 
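What the trace above is doing is setup/common.sh's get_meminfo: it swaps /proc/meminfo for the per-node copy under /sys/devices/system/node/node0/meminfo, strips the "Node 0 " prefix from each line, and then walks the file one "field: value" pair at a time with IFS=': ' until the requested field turns up. A minimal self-contained sketch of that pattern, assuming nothing beyond standard bash (the function name and argument handling here are illustrative, not the exact helper):

    # Sketch: print the value of one meminfo field, optionally for a single NUMA node.
    get_meminfo_sketch() {
        local key=$1 node=$2
        local file=/proc/meminfo line
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            file=/sys/devices/system/node/node$node/meminfo
        while read -r line; do
            line=${line#"Node $node "}        # per-node lines carry a "Node N " prefix
            [[ $line == "$key:"* ]] || continue
            line=${line#"$key:"}              # keep only "   <value> [kB]"
            set -- $line                      # let word splitting isolate the number
            echo "$1"
            return 0
        done < "$file"
        return 1
    }
    # Example against the values above: get_meminfo_sketch HugePages_Surp 0  ->  0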
04:08:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.903 04:08:27 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.903 04:08:27 -- setup/common.sh@32 -- # continue (the same test-and-continue pair repeats for each remaining node0 meminfo field) 00:05:14.904 04:08:27
-- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.904 04:08:27 -- setup/common.sh@33 -- # echo 0 00:05:14.904 04:08:27 -- setup/common.sh@33 -- # return 0 00:05:14.904 04:08:27 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:14.904 04:08:27 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:14.904 04:08:27 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:14.904 04:08:27 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:14.904 node0=1024 expecting 1024 00:05:14.904 04:08:27 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:14.904 04:08:27 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:14.904 00:05:14.904 real 0m1.070s 00:05:14.904 user 0m0.545s 00:05:14.904 sys 0m0.565s 00:05:14.904 04:08:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:14.904 04:08:27 -- common/autotest_common.sh@10 -- # set +x 00:05:14.904 ************************************ 00:05:14.904 END TEST no_shrink_alloc 00:05:14.904 ************************************ 00:05:14.904 04:08:27 -- setup/hugepages.sh@217 -- # clear_hp 00:05:14.904 04:08:27 -- setup/hugepages.sh@37 -- # local node hp 00:05:14.904 04:08:27 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:14.904 04:08:27 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:14.904 04:08:27 -- setup/hugepages.sh@41 -- # echo 0 00:05:14.904 04:08:27 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:14.904 04:08:27 -- setup/hugepages.sh@41 -- # echo 0 00:05:14.904 04:08:27 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:14.904 04:08:27 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:14.904 00:05:14.904 real 0m4.906s 00:05:14.904 user 0m2.390s 00:05:14.904 sys 0m2.460s 00:05:14.904 04:08:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:14.904 04:08:27 -- common/autotest_common.sh@10 -- # set +x 00:05:14.904 ************************************ 00:05:14.904 END TEST hugepages 00:05:14.904 ************************************ 00:05:14.904 04:08:27 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:14.904 04:08:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:14.904 04:08:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:14.904 04:08:27 -- common/autotest_common.sh@10 -- # set +x 00:05:14.904 ************************************ 00:05:14.904 START TEST driver 00:05:14.904 ************************************ 00:05:14.904 04:08:27 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:15.163 * Looking for test storage... 
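Just before the hugepages group finishes above, clear_hp puts the machine back the way it found it by writing zero into every per-node hugepage pool through sysfs; the CLEAR_HUGE export mirrors the log, though what consumes it later is not visible here. A sketch of the same idea, assuming the standard sysfs layout and root privileges:

    # Sketch: release all reserved hugepages on every NUMA node and page size (needs root).
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node_dir"/hugepages/hugepages-*; do
            echo 0 > "$hp/nr_hugepages"
        done
    done
    export CLEAR_HUGE=yes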
00:05:15.163 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:15.163 04:08:27 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:15.163 04:08:27 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:15.163 04:08:27 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:15.163 04:08:27 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:15.163 04:08:27 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:15.163 04:08:27 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:15.163 04:08:27 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:15.163 04:08:27 -- scripts/common.sh@335 -- # IFS=.-: 00:05:15.163 04:08:27 -- scripts/common.sh@335 -- # read -ra ver1 00:05:15.163 04:08:27 -- scripts/common.sh@336 -- # IFS=.-: 00:05:15.163 04:08:27 -- scripts/common.sh@336 -- # read -ra ver2 00:05:15.163 04:08:27 -- scripts/common.sh@337 -- # local 'op=<' 00:05:15.163 04:08:27 -- scripts/common.sh@339 -- # ver1_l=2 00:05:15.163 04:08:27 -- scripts/common.sh@340 -- # ver2_l=1 00:05:15.163 04:08:27 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:15.163 04:08:27 -- scripts/common.sh@343 -- # case "$op" in 00:05:15.163 04:08:27 -- scripts/common.sh@344 -- # : 1 00:05:15.163 04:08:27 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:15.163 04:08:27 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:15.163 04:08:27 -- scripts/common.sh@364 -- # decimal 1 00:05:15.163 04:08:27 -- scripts/common.sh@352 -- # local d=1 00:05:15.163 04:08:27 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:15.163 04:08:27 -- scripts/common.sh@354 -- # echo 1 00:05:15.163 04:08:27 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:15.163 04:08:27 -- scripts/common.sh@365 -- # decimal 2 00:05:15.163 04:08:27 -- scripts/common.sh@352 -- # local d=2 00:05:15.163 04:08:27 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:15.163 04:08:27 -- scripts/common.sh@354 -- # echo 2 00:05:15.163 04:08:27 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:15.163 04:08:27 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:15.163 04:08:27 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:15.163 04:08:27 -- scripts/common.sh@367 -- # return 0 00:05:15.163 04:08:27 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:15.163 04:08:27 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:15.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.163 --rc genhtml_branch_coverage=1 00:05:15.163 --rc genhtml_function_coverage=1 00:05:15.163 --rc genhtml_legend=1 00:05:15.163 --rc geninfo_all_blocks=1 00:05:15.163 --rc geninfo_unexecuted_blocks=1 00:05:15.163 00:05:15.163 ' 00:05:15.163 04:08:27 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:15.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.163 --rc genhtml_branch_coverage=1 00:05:15.163 --rc genhtml_function_coverage=1 00:05:15.163 --rc genhtml_legend=1 00:05:15.163 --rc geninfo_all_blocks=1 00:05:15.163 --rc geninfo_unexecuted_blocks=1 00:05:15.163 00:05:15.163 ' 00:05:15.163 04:08:27 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:15.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.163 --rc genhtml_branch_coverage=1 00:05:15.163 --rc genhtml_function_coverage=1 00:05:15.163 --rc genhtml_legend=1 00:05:15.163 --rc geninfo_all_blocks=1 00:05:15.163 --rc geninfo_unexecuted_blocks=1 00:05:15.163 00:05:15.163 ' 00:05:15.163 04:08:27 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:15.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.163 --rc genhtml_branch_coverage=1 00:05:15.163 --rc genhtml_function_coverage=1 00:05:15.163 --rc genhtml_legend=1 00:05:15.163 --rc geninfo_all_blocks=1 00:05:15.163 --rc geninfo_unexecuted_blocks=1 00:05:15.163 00:05:15.163 ' 00:05:15.163 04:08:27 -- setup/driver.sh@68 -- # setup reset 00:05:15.163 04:08:27 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:15.163 04:08:27 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:15.732 04:08:28 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:15.732 04:08:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:15.732 04:08:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:15.732 04:08:28 -- common/autotest_common.sh@10 -- # set +x 00:05:15.732 ************************************ 00:05:15.732 START TEST guess_driver 00:05:15.732 ************************************ 00:05:15.732 04:08:28 -- common/autotest_common.sh@1114 -- # guess_driver 00:05:15.732 04:08:28 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:15.732 04:08:28 -- setup/driver.sh@47 -- # local fail=0 00:05:15.732 04:08:28 -- setup/driver.sh@49 -- # pick_driver 00:05:15.732 04:08:28 -- setup/driver.sh@36 -- # vfio 00:05:15.732 04:08:28 -- setup/driver.sh@21 -- # local iommu_grups 00:05:15.732 04:08:28 -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:15.732 04:08:28 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:15.732 04:08:28 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:15.732 04:08:28 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:05:15.732 04:08:28 -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:05:15.732 04:08:28 -- setup/driver.sh@32 -- # return 1 00:05:15.732 04:08:28 -- setup/driver.sh@38 -- # uio 00:05:15.732 04:08:28 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:05:15.732 04:08:28 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:05:15.732 04:08:28 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:05:15.732 04:08:28 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:05:15.732 04:08:28 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio.ko.xz 00:05:15.732 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:05:15.732 04:08:28 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:05:15.732 Looking for driver=uio_pci_generic 00:05:15.732 04:08:28 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:05:15.732 04:08:28 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:15.732 04:08:28 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:05:15.732 04:08:28 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:15.732 04:08:28 -- setup/driver.sh@45 -- # setup output config 00:05:15.732 04:08:28 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:15.732 04:08:28 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:16.299 04:08:28 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:05:16.299 04:08:28 -- setup/driver.sh@58 -- # continue 00:05:16.299 04:08:28 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:16.557 04:08:28 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:16.557 04:08:28 -- setup/driver.sh@61 -- # [[ uio_pci_generic == 
uio_pci_generic ]] 00:05:16.557 04:08:28 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:16.557 04:08:28 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:16.557 04:08:28 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:16.557 04:08:28 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:16.557 04:08:28 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:16.557 04:08:29 -- setup/driver.sh@65 -- # setup reset 00:05:16.557 04:08:29 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:16.557 04:08:29 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:17.125 00:05:17.125 real 0m1.437s 00:05:17.125 user 0m0.568s 00:05:17.125 sys 0m0.883s 00:05:17.125 04:08:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:17.125 04:08:29 -- common/autotest_common.sh@10 -- # set +x 00:05:17.125 ************************************ 00:05:17.125 END TEST guess_driver 00:05:17.125 ************************************ 00:05:17.125 00:05:17.125 real 0m2.208s 00:05:17.125 user 0m0.885s 00:05:17.125 sys 0m1.413s 00:05:17.125 04:08:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:17.125 04:08:29 -- common/autotest_common.sh@10 -- # set +x 00:05:17.125 ************************************ 00:05:17.125 END TEST driver 00:05:17.125 ************************************ 00:05:17.125 04:08:29 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:17.125 04:08:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:17.125 04:08:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:17.125 04:08:29 -- common/autotest_common.sh@10 -- # set +x 00:05:17.125 ************************************ 00:05:17.125 START TEST devices 00:05:17.125 ************************************ 00:05:17.125 04:08:29 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:17.384 * Looking for test storage... 00:05:17.384 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:17.384 04:08:29 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:17.384 04:08:29 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:17.384 04:08:29 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:17.384 04:08:29 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:17.384 04:08:29 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:17.384 04:08:29 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:17.384 04:08:29 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:17.384 04:08:29 -- scripts/common.sh@335 -- # IFS=.-: 00:05:17.384 04:08:29 -- scripts/common.sh@335 -- # read -ra ver1 00:05:17.384 04:08:29 -- scripts/common.sh@336 -- # IFS=.-: 00:05:17.384 04:08:29 -- scripts/common.sh@336 -- # read -ra ver2 00:05:17.384 04:08:29 -- scripts/common.sh@337 -- # local 'op=<' 00:05:17.384 04:08:29 -- scripts/common.sh@339 -- # ver1_l=2 00:05:17.384 04:08:29 -- scripts/common.sh@340 -- # ver2_l=1 00:05:17.384 04:08:29 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:17.384 04:08:29 -- scripts/common.sh@343 -- # case "$op" in 00:05:17.384 04:08:29 -- scripts/common.sh@344 -- # : 1 00:05:17.384 04:08:29 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:17.384 04:08:29 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:17.384 04:08:29 -- scripts/common.sh@364 -- # decimal 1 00:05:17.384 04:08:29 -- scripts/common.sh@352 -- # local d=1 00:05:17.384 04:08:29 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:17.384 04:08:29 -- scripts/common.sh@354 -- # echo 1 00:05:17.384 04:08:29 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:17.384 04:08:29 -- scripts/common.sh@365 -- # decimal 2 00:05:17.384 04:08:29 -- scripts/common.sh@352 -- # local d=2 00:05:17.384 04:08:29 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:17.384 04:08:29 -- scripts/common.sh@354 -- # echo 2 00:05:17.384 04:08:29 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:17.384 04:08:29 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:17.384 04:08:29 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:17.384 04:08:29 -- scripts/common.sh@367 -- # return 0 00:05:17.384 04:08:29 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:17.384 04:08:29 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:17.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.384 --rc genhtml_branch_coverage=1 00:05:17.384 --rc genhtml_function_coverage=1 00:05:17.384 --rc genhtml_legend=1 00:05:17.384 --rc geninfo_all_blocks=1 00:05:17.384 --rc geninfo_unexecuted_blocks=1 00:05:17.384 00:05:17.384 ' 00:05:17.384 04:08:29 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:17.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.384 --rc genhtml_branch_coverage=1 00:05:17.384 --rc genhtml_function_coverage=1 00:05:17.384 --rc genhtml_legend=1 00:05:17.384 --rc geninfo_all_blocks=1 00:05:17.384 --rc geninfo_unexecuted_blocks=1 00:05:17.384 00:05:17.384 ' 00:05:17.384 04:08:29 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:17.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.384 --rc genhtml_branch_coverage=1 00:05:17.384 --rc genhtml_function_coverage=1 00:05:17.384 --rc genhtml_legend=1 00:05:17.384 --rc geninfo_all_blocks=1 00:05:17.385 --rc geninfo_unexecuted_blocks=1 00:05:17.385 00:05:17.385 ' 00:05:17.385 04:08:29 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:17.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.385 --rc genhtml_branch_coverage=1 00:05:17.385 --rc genhtml_function_coverage=1 00:05:17.385 --rc genhtml_legend=1 00:05:17.385 --rc geninfo_all_blocks=1 00:05:17.385 --rc geninfo_unexecuted_blocks=1 00:05:17.385 00:05:17.385 ' 00:05:17.385 04:08:29 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:17.385 04:08:29 -- setup/devices.sh@192 -- # setup reset 00:05:17.385 04:08:29 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:17.385 04:08:29 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:18.320 04:08:30 -- setup/devices.sh@194 -- # get_zoned_devs 00:05:18.320 04:08:30 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:05:18.320 04:08:30 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:05:18.320 04:08:30 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:05:18.320 04:08:30 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:18.320 04:08:30 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:05:18.320 04:08:30 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:05:18.320 04:08:30 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:18.320 04:08:30 -- common/autotest_common.sh@1660 
-- # [[ none != none ]] 00:05:18.320 04:08:30 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:18.320 04:08:30 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:05:18.320 04:08:30 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:05:18.320 04:08:30 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:18.320 04:08:30 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:18.320 04:08:30 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:18.320 04:08:30 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:05:18.320 04:08:30 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:05:18.320 04:08:30 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:18.320 04:08:30 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:18.320 04:08:30 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:18.320 04:08:30 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:05:18.320 04:08:30 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:05:18.320 04:08:30 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:18.320 04:08:30 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:18.320 04:08:30 -- setup/devices.sh@196 -- # blocks=() 00:05:18.320 04:08:30 -- setup/devices.sh@196 -- # declare -a blocks 00:05:18.320 04:08:30 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:18.320 04:08:30 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:18.320 04:08:30 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:18.320 04:08:30 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:18.320 04:08:30 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:18.320 04:08:30 -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:18.320 04:08:30 -- setup/devices.sh@202 -- # pci=0000:00:06.0 00:05:18.320 04:08:30 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:05:18.320 04:08:30 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:18.320 04:08:30 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:05:18.320 04:08:30 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:05:18.320 No valid GPT data, bailing 00:05:18.320 04:08:30 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:18.320 04:08:30 -- scripts/common.sh@393 -- # pt= 00:05:18.320 04:08:30 -- scripts/common.sh@394 -- # return 1 00:05:18.320 04:08:30 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:18.320 04:08:30 -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:18.320 04:08:30 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:18.320 04:08:30 -- setup/common.sh@80 -- # echo 5368709120 00:05:18.320 04:08:30 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:05:18.320 04:08:30 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:18.320 04:08:30 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:06.0 00:05:18.320 04:08:30 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:18.320 04:08:30 -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:05:18.320 04:08:30 -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:18.320 04:08:30 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:05:18.320 04:08:30 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:05:18.320 04:08:30 -- setup/devices.sh@204 -- # block_in_use nvme1n1 
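Before a namespace joins the test pool, block_in_use has to confirm nothing already owns it: the "No valid GPT data, bailing" line is SPDK's own GPT probe finding nothing usable, and the blkid fallback then reports no partition-table type, so the device is treated as free. A reduced sketch of that decision (the helper name below is illustrative, not the real function):

    # Sketch: a disk with no partition-table signature is considered free for testing.
    disk_in_use_sketch() {
        local disk=$1 pt
        pt=$(blkid -s PTTYPE -o value "/dev/$disk")
        [[ -n $pt ]]      # succeed (device in use) only when blkid reported a PTTYPE
    }
    disk_in_use_sketch nvme0n1 || echo "nvme0n1 carries no partition table, ok to claim"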
00:05:18.320 04:08:30 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:05:18.320 04:08:30 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:05:18.321 No valid GPT data, bailing 00:05:18.321 04:08:30 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:18.321 04:08:30 -- scripts/common.sh@393 -- # pt= 00:05:18.321 04:08:30 -- scripts/common.sh@394 -- # return 1 00:05:18.321 04:08:30 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:05:18.321 04:08:30 -- setup/common.sh@76 -- # local dev=nvme1n1 00:05:18.321 04:08:30 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:05:18.321 04:08:30 -- setup/common.sh@80 -- # echo 4294967296 00:05:18.321 04:08:30 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:18.321 04:08:30 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:18.321 04:08:30 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:05:18.321 04:08:30 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:18.321 04:08:30 -- setup/devices.sh@201 -- # ctrl=nvme1n2 00:05:18.321 04:08:30 -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:18.321 04:08:30 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:05:18.321 04:08:30 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:05:18.321 04:08:30 -- setup/devices.sh@204 -- # block_in_use nvme1n2 00:05:18.321 04:08:30 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:05:18.321 04:08:30 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:05:18.321 No valid GPT data, bailing 00:05:18.321 04:08:30 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:18.321 04:08:30 -- scripts/common.sh@393 -- # pt= 00:05:18.321 04:08:30 -- scripts/common.sh@394 -- # return 1 00:05:18.321 04:08:30 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n2 00:05:18.321 04:08:30 -- setup/common.sh@76 -- # local dev=nvme1n2 00:05:18.321 04:08:30 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n2 ]] 00:05:18.321 04:08:30 -- setup/common.sh@80 -- # echo 4294967296 00:05:18.321 04:08:30 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:18.321 04:08:30 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:18.321 04:08:30 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:05:18.321 04:08:30 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:18.321 04:08:30 -- setup/devices.sh@201 -- # ctrl=nvme1n3 00:05:18.321 04:08:30 -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:18.321 04:08:30 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:05:18.321 04:08:30 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:05:18.321 04:08:30 -- setup/devices.sh@204 -- # block_in_use nvme1n3 00:05:18.321 04:08:30 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:05:18.321 04:08:30 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:05:18.321 No valid GPT data, bailing 00:05:18.321 04:08:30 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:18.321 04:08:30 -- scripts/common.sh@393 -- # pt= 00:05:18.321 04:08:30 -- scripts/common.sh@394 -- # return 1 00:05:18.321 04:08:30 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n3 00:05:18.321 04:08:30 -- setup/common.sh@76 -- # local dev=nvme1n3 00:05:18.321 04:08:30 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n3 ]] 00:05:18.321 04:08:30 -- setup/common.sh@80 -- # echo 4294967296 
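The echo 5368709120 and echo 4294967296 entries are sec_size_to_bytes turning each namespace's sector count into bytes so it can be compared with min_disk_size (3 GiB, declared just above as 3221225472). The kernel publishes /sys/block/<dev>/size in 512-byte sectors regardless of the drive's logical block size, so the conversion is one multiply; a sketch:

    # Sketch: byte size of a whole block device, from its sysfs sector count.
    sec_size_to_bytes_sketch() {
        echo $(( $(< "/sys/block/$1/size") * 512 ))
    }
    min_disk_size=$((3 * 1024 * 1024 * 1024))    # 3221225472, as in devices.sh
    (( $(sec_size_to_bytes_sketch nvme0n1) >= min_disk_size )) && echo "nvme0n1 is big enough"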
00:05:18.666 04:08:30 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:18.666 04:08:30 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:18.666 04:08:30 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:05:18.666 04:08:30 -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:05:18.666 04:08:30 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:18.666 04:08:30 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:18.666 04:08:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:18.666 04:08:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:18.666 04:08:30 -- common/autotest_common.sh@10 -- # set +x 00:05:18.666 ************************************ 00:05:18.666 START TEST nvme_mount 00:05:18.666 ************************************ 00:05:18.666 04:08:30 -- common/autotest_common.sh@1114 -- # nvme_mount 00:05:18.666 04:08:30 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:18.666 04:08:30 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:18.666 04:08:30 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:18.666 04:08:30 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:18.666 04:08:30 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:18.666 04:08:30 -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:18.666 04:08:30 -- setup/common.sh@40 -- # local part_no=1 00:05:18.666 04:08:30 -- setup/common.sh@41 -- # local size=1073741824 00:05:18.666 04:08:30 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:18.666 04:08:30 -- setup/common.sh@44 -- # parts=() 00:05:18.666 04:08:30 -- setup/common.sh@44 -- # local parts 00:05:18.666 04:08:30 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:18.666 04:08:30 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:18.666 04:08:30 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:18.666 04:08:30 -- setup/common.sh@46 -- # (( part++ )) 00:05:18.666 04:08:30 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:18.666 04:08:30 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:18.666 04:08:30 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:18.666 04:08:30 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:19.600 Creating new GPT entries in memory. 00:05:19.600 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:19.600 other utilities. 00:05:19.600 04:08:31 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:19.600 04:08:31 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:19.600 04:08:31 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:19.600 04:08:31 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:19.600 04:08:31 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:20.535 Creating new GPT entries in memory. 00:05:20.535 The operation has completed successfully. 
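partition_drive prepares the test disk in two moves: wipe any existing partition structures, then create the test partition under flock so nothing else races the tool, while sync_dev_uevents.sh (an SPDK helper which, going by its name, waits for the new partition's uevents) holds the script until /dev/nvme0n1p1 exists. A rough equivalent using only standard tools, with the partition bounds copied from the log and udevadm settle standing in for the uevent helper:

    disk=/dev/nvme0n1
    sgdisk "$disk" --zap-all                           # destroy existing GPT/MBR structures
    flock "$disk" sgdisk "$disk" --new=1:2048:264191   # partition 1, bounds as in the log
    udevadm settle                                     # wait for /dev/nvme0n1p1 to show up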
00:05:20.535 04:08:32 -- setup/common.sh@57 -- # (( part++ )) 00:05:20.535 04:08:32 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:20.535 04:08:32 -- setup/common.sh@62 -- # wait 64114 00:05:20.535 04:08:32 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:20.535 04:08:32 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:05:20.535 04:08:32 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:20.535 04:08:32 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:20.535 04:08:32 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:20.535 04:08:33 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:20.535 04:08:33 -- setup/devices.sh@105 -- # verify 0000:00:06.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:20.535 04:08:33 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:20.535 04:08:33 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:20.535 04:08:33 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:20.535 04:08:33 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:20.535 04:08:33 -- setup/devices.sh@53 -- # local found=0 00:05:20.535 04:08:33 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:20.535 04:08:33 -- setup/devices.sh@56 -- # : 00:05:20.535 04:08:33 -- setup/devices.sh@59 -- # local pci status 00:05:20.535 04:08:33 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:20.535 04:08:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.535 04:08:33 -- setup/devices.sh@47 -- # setup output config 00:05:20.535 04:08:33 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:20.535 04:08:33 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:20.794 04:08:33 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:20.794 04:08:33 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:20.794 04:08:33 -- setup/devices.sh@63 -- # found=1 00:05:20.794 04:08:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.794 04:08:33 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:20.794 04:08:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.053 04:08:33 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:21.053 04:08:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.053 04:08:33 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:21.053 04:08:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.312 04:08:33 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:21.312 04:08:33 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:21.312 04:08:33 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:21.312 04:08:33 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:21.312 04:08:33 -- setup/devices.sh@74 -- # rm 
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:21.312 04:08:33 -- setup/devices.sh@110 -- # cleanup_nvme 00:05:21.312 04:08:33 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:21.312 04:08:33 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:21.312 04:08:33 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:21.312 04:08:33 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:21.312 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:21.312 04:08:33 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:21.313 04:08:33 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:21.572 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:21.572 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:21.572 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:21.572 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:21.572 04:08:33 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:05:21.572 04:08:33 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:05:21.572 04:08:33 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:21.572 04:08:33 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:21.572 04:08:33 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:21.572 04:08:34 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:21.572 04:08:34 -- setup/devices.sh@116 -- # verify 0000:00:06.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:21.572 04:08:34 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:21.572 04:08:34 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:21.572 04:08:34 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:21.572 04:08:34 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:21.572 04:08:34 -- setup/devices.sh@53 -- # local found=0 00:05:21.572 04:08:34 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:21.572 04:08:34 -- setup/devices.sh@56 -- # : 00:05:21.572 04:08:34 -- setup/devices.sh@59 -- # local pci status 00:05:21.572 04:08:34 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:21.572 04:08:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.572 04:08:34 -- setup/devices.sh@47 -- # setup output config 00:05:21.572 04:08:34 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:21.572 04:08:34 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:21.831 04:08:34 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:21.831 04:08:34 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:21.831 04:08:34 -- setup/devices.sh@63 -- # found=1 00:05:21.831 04:08:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.831 04:08:34 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:21.831 
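The nvme_mount flow visible here is: format the fresh partition, mount it under test/setup/nvme_mount, drop a marker file, confirm that setup.sh declines to rebind a PCI device whose namespace is mounted, then unmount and wipe every signature (the "53 ef" bytes are ext4's magic) so the follow-up whole-disk pass starts from a clean drive. Condensed into a sketch, with paths and the marker-file name taken from the log:

    mnt=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
    mkdir -p "$mnt"
    mkfs.ext4 -qF /dev/nvme0n1p1       # quiet + force: the partition was just created
    mount /dev/nvme0n1p1 "$mnt"
    : > "$mnt/test_nvme"               # marker file the verify step expects to find

    # ... verification that setup.sh leaves the mounted device's PCI address alone ...

    rm "$mnt/test_nvme"
    umount "$mnt"
    wipefs --all /dev/nvme0n1p1        # clears the ext4 signature
    wipefs --all /dev/nvme0n1          # clears GPT, backup GPT and protective MBR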
04:08:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.090 04:08:34 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:22.090 04:08:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.090 04:08:34 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:22.090 04:08:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.349 04:08:34 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:22.349 04:08:34 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:22.349 04:08:34 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:22.349 04:08:34 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:22.349 04:08:34 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:22.349 04:08:34 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:22.349 04:08:34 -- setup/devices.sh@125 -- # verify 0000:00:06.0 data@nvme0n1 '' '' 00:05:22.349 04:08:34 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:22.349 04:08:34 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:22.349 04:08:34 -- setup/devices.sh@50 -- # local mount_point= 00:05:22.349 04:08:34 -- setup/devices.sh@51 -- # local test_file= 00:05:22.349 04:08:34 -- setup/devices.sh@53 -- # local found=0 00:05:22.349 04:08:34 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:22.349 04:08:34 -- setup/devices.sh@59 -- # local pci status 00:05:22.349 04:08:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.349 04:08:34 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:22.349 04:08:34 -- setup/devices.sh@47 -- # setup output config 00:05:22.349 04:08:34 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:22.349 04:08:34 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:22.609 04:08:34 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:22.609 04:08:34 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:22.609 04:08:34 -- setup/devices.sh@63 -- # found=1 00:05:22.609 04:08:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.609 04:08:34 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:22.609 04:08:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.868 04:08:35 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:22.868 04:08:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.868 04:08:35 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:22.868 04:08:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.126 04:08:35 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:23.126 04:08:35 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:23.126 04:08:35 -- setup/devices.sh@68 -- # return 0 00:05:23.126 04:08:35 -- setup/devices.sh@128 -- # cleanup_nvme 00:05:23.126 04:08:35 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:23.126 04:08:35 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:23.126 04:08:35 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:23.126 04:08:35 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:23.126 /dev/nvme0n1: 2 bytes were erased at offset 
0x00000438 (ext4): 53 ef 00:05:23.126 00:05:23.126 real 0m4.565s 00:05:23.126 user 0m1.050s 00:05:23.126 sys 0m1.204s 00:05:23.126 04:08:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:23.126 04:08:35 -- common/autotest_common.sh@10 -- # set +x 00:05:23.126 ************************************ 00:05:23.126 END TEST nvme_mount 00:05:23.126 ************************************ 00:05:23.126 04:08:35 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:23.126 04:08:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:23.126 04:08:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:23.126 04:08:35 -- common/autotest_common.sh@10 -- # set +x 00:05:23.126 ************************************ 00:05:23.126 START TEST dm_mount 00:05:23.126 ************************************ 00:05:23.126 04:08:35 -- common/autotest_common.sh@1114 -- # dm_mount 00:05:23.126 04:08:35 -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:23.126 04:08:35 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:23.126 04:08:35 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:23.126 04:08:35 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:23.126 04:08:35 -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:23.126 04:08:35 -- setup/common.sh@40 -- # local part_no=2 00:05:23.126 04:08:35 -- setup/common.sh@41 -- # local size=1073741824 00:05:23.126 04:08:35 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:23.126 04:08:35 -- setup/common.sh@44 -- # parts=() 00:05:23.126 04:08:35 -- setup/common.sh@44 -- # local parts 00:05:23.126 04:08:35 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:23.126 04:08:35 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:23.126 04:08:35 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:23.126 04:08:35 -- setup/common.sh@46 -- # (( part++ )) 00:05:23.126 04:08:35 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:23.126 04:08:35 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:23.126 04:08:35 -- setup/common.sh@46 -- # (( part++ )) 00:05:23.126 04:08:35 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:23.126 04:08:35 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:23.126 04:08:35 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:23.126 04:08:35 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:24.062 Creating new GPT entries in memory. 00:05:24.062 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:24.062 other utilities. 00:05:24.062 04:08:36 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:24.062 04:08:36 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:24.062 04:08:36 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:24.062 04:08:36 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:24.062 04:08:36 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:25.441 Creating new GPT entries in memory. 00:05:25.441 The operation has completed successfully. 00:05:25.441 04:08:37 -- setup/common.sh@57 -- # (( part++ )) 00:05:25.441 04:08:37 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:25.441 04:08:37 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:05:25.441 04:08:37 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:25.441 04:08:37 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:05:26.379 The operation has completed successfully. 00:05:26.379 04:08:38 -- setup/common.sh@57 -- # (( part++ )) 00:05:26.379 04:08:38 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:26.379 04:08:38 -- setup/common.sh@62 -- # wait 64574 00:05:26.379 04:08:38 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:26.379 04:08:38 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:26.379 04:08:38 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:26.379 04:08:38 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:26.379 04:08:38 -- setup/devices.sh@160 -- # for t in {1..5} 00:05:26.379 04:08:38 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:26.379 04:08:38 -- setup/devices.sh@161 -- # break 00:05:26.379 04:08:38 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:26.379 04:08:38 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:26.379 04:08:38 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:26.379 04:08:38 -- setup/devices.sh@166 -- # dm=dm-0 00:05:26.379 04:08:38 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:26.379 04:08:38 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:26.379 04:08:38 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:26.379 04:08:38 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:05:26.379 04:08:38 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:26.379 04:08:38 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:26.379 04:08:38 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:26.379 04:08:38 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:26.379 04:08:38 -- setup/devices.sh@174 -- # verify 0000:00:06.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:26.379 04:08:38 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:26.379 04:08:38 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:26.379 04:08:38 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:26.379 04:08:38 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:26.379 04:08:38 -- setup/devices.sh@53 -- # local found=0 00:05:26.379 04:08:38 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:26.379 04:08:38 -- setup/devices.sh@56 -- # : 00:05:26.379 04:08:38 -- setup/devices.sh@59 -- # local pci status 00:05:26.379 04:08:38 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:26.379 04:08:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.379 04:08:38 -- setup/devices.sh@47 -- # setup output config 00:05:26.379 04:08:38 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:26.379 04:08:38 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:26.379 04:08:38 -- 
setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:26.379 04:08:38 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:26.379 04:08:38 -- setup/devices.sh@63 -- # found=1 00:05:26.379 04:08:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.379 04:08:38 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:26.379 04:08:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.639 04:08:39 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:26.639 04:08:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.898 04:08:39 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:26.898 04:08:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.898 04:08:39 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:26.898 04:08:39 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:05:26.898 04:08:39 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:26.898 04:08:39 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:26.898 04:08:39 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:26.898 04:08:39 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:26.898 04:08:39 -- setup/devices.sh@184 -- # verify 0000:00:06.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:26.898 04:08:39 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:26.898 04:08:39 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:26.898 04:08:39 -- setup/devices.sh@50 -- # local mount_point= 00:05:26.898 04:08:39 -- setup/devices.sh@51 -- # local test_file= 00:05:26.898 04:08:39 -- setup/devices.sh@53 -- # local found=0 00:05:26.898 04:08:39 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:26.898 04:08:39 -- setup/devices.sh@59 -- # local pci status 00:05:26.898 04:08:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.898 04:08:39 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:26.898 04:08:39 -- setup/devices.sh@47 -- # setup output config 00:05:26.898 04:08:39 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:26.898 04:08:39 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:27.157 04:08:39 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:27.157 04:08:39 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:27.157 04:08:39 -- setup/devices.sh@63 -- # found=1 00:05:27.157 04:08:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.157 04:08:39 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:27.157 04:08:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.417 04:08:39 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:27.417 04:08:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.417 04:08:39 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:27.417 04:08:39 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.676 04:08:40 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:27.676 04:08:40 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:27.676 04:08:40 -- setup/devices.sh@68 -- # return 0 00:05:27.676 04:08:40 -- setup/devices.sh@187 -- # cleanup_dm 00:05:27.676 04:08:40 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:27.676 04:08:40 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:27.676 04:08:40 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:27.676 04:08:40 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:27.676 04:08:40 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:27.676 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:27.676 04:08:40 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:27.676 04:08:40 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:27.676 00:05:27.676 real 0m4.594s 00:05:27.676 user 0m0.676s 00:05:27.676 sys 0m0.851s 00:05:27.676 04:08:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:27.676 ************************************ 00:05:27.676 04:08:40 -- common/autotest_common.sh@10 -- # set +x 00:05:27.676 END TEST dm_mount 00:05:27.676 ************************************ 00:05:27.676 04:08:40 -- setup/devices.sh@1 -- # cleanup 00:05:27.676 04:08:40 -- setup/devices.sh@11 -- # cleanup_nvme 00:05:27.676 04:08:40 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:27.676 04:08:40 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:27.676 04:08:40 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:27.676 04:08:40 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:27.676 04:08:40 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:27.935 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:27.935 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:27.935 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:27.935 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:27.935 04:08:40 -- setup/devices.sh@12 -- # cleanup_dm 00:05:27.935 04:08:40 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:27.935 04:08:40 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:27.935 04:08:40 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:27.935 04:08:40 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:27.935 04:08:40 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:27.935 04:08:40 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:27.935 00:05:27.935 real 0m10.804s 00:05:27.935 user 0m2.473s 00:05:27.935 sys 0m2.665s 00:05:27.935 04:08:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:27.935 04:08:40 -- common/autotest_common.sh@10 -- # set +x 00:05:27.935 ************************************ 00:05:27.935 END TEST devices 00:05:27.935 ************************************ 00:05:28.193 00:05:28.193 real 0m22.686s 00:05:28.193 user 0m7.852s 00:05:28.193 sys 0m9.157s 00:05:28.193 04:08:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:28.193 04:08:40 -- common/autotest_common.sh@10 -- # set +x 00:05:28.193 ************************************ 00:05:28.193 END TEST setup.sh 00:05:28.193 ************************************ 00:05:28.193 04:08:40 -- 
spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:28.193 Hugepages 00:05:28.193 node hugesize free / total 00:05:28.193 node0 1048576kB 0 / 0 00:05:28.193 node0 2048kB 2048 / 2048 00:05:28.193 00:05:28.193 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:28.452 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:28.452 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:28.452 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:05:28.452 04:08:40 -- spdk/autotest.sh@128 -- # uname -s 00:05:28.452 04:08:40 -- spdk/autotest.sh@128 -- # [[ Linux == Linux ]] 00:05:28.452 04:08:40 -- spdk/autotest.sh@130 -- # nvme_namespace_revert 00:05:28.452 04:08:40 -- common/autotest_common.sh@1526 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:29.019 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:29.277 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:29.277 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:05:29.277 04:08:41 -- common/autotest_common.sh@1527 -- # sleep 1 00:05:30.662 04:08:42 -- common/autotest_common.sh@1528 -- # bdfs=() 00:05:30.662 04:08:42 -- common/autotest_common.sh@1528 -- # local bdfs 00:05:30.662 04:08:42 -- common/autotest_common.sh@1529 -- # bdfs=($(get_nvme_bdfs)) 00:05:30.662 04:08:42 -- common/autotest_common.sh@1529 -- # get_nvme_bdfs 00:05:30.662 04:08:42 -- common/autotest_common.sh@1508 -- # bdfs=() 00:05:30.662 04:08:42 -- common/autotest_common.sh@1508 -- # local bdfs 00:05:30.662 04:08:42 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:30.662 04:08:42 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:30.662 04:08:42 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:05:30.662 04:08:42 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:05:30.662 04:08:42 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:05:30.662 04:08:42 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:30.936 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:30.936 Waiting for block devices as requested 00:05:30.936 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:05:30.936 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:05:31.195 04:08:43 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:05:31.195 04:08:43 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0 00:05:31.195 04:08:43 -- common/autotest_common.sh@1497 -- # grep 0000:00:06.0/nvme/nvme 00:05:31.195 04:08:43 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:31.195 04:08:43 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:05:31.195 04:08:43 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 ]] 00:05:31.195 04:08:43 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:05:31.195 04:08:43 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme0 00:05:31.195 04:08:43 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme0 00:05:31.195 04:08:43 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme0 ]] 00:05:31.195 04:08:43 -- 
common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:31.195 04:08:43 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:31.195 04:08:43 -- common/autotest_common.sh@1540 -- # grep oacs 00:05:31.195 04:08:43 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:05:31.195 04:08:43 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:05:31.195 04:08:43 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:05:31.195 04:08:43 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme0 00:05:31.195 04:08:43 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:05:31.195 04:08:43 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:05:31.195 04:08:43 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:05:31.195 04:08:43 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:05:31.195 04:08:43 -- common/autotest_common.sh@1552 -- # continue 00:05:31.195 04:08:43 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:05:31.195 04:08:43 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:07.0 00:05:31.195 04:08:43 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:31.195 04:08:43 -- common/autotest_common.sh@1497 -- # grep 0000:00:07.0/nvme/nvme 00:05:31.195 04:08:43 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:05:31.195 04:08:43 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 ]] 00:05:31.195 04:08:43 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:05:31.195 04:08:43 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme1 00:05:31.195 04:08:43 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme1 00:05:31.195 04:08:43 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme1 ]] 00:05:31.195 04:08:43 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:05:31.195 04:08:43 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:31.195 04:08:43 -- common/autotest_common.sh@1540 -- # grep oacs 00:05:31.195 04:08:43 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:05:31.195 04:08:43 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:05:31.195 04:08:43 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:05:31.195 04:08:43 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme1 00:05:31.195 04:08:43 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:05:31.195 04:08:43 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:05:31.195 04:08:43 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:05:31.195 04:08:43 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:05:31.195 04:08:43 -- common/autotest_common.sh@1552 -- # continue 00:05:31.195 04:08:43 -- spdk/autotest.sh@133 -- # timing_exit pre_cleanup 00:05:31.195 04:08:43 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:31.195 04:08:43 -- common/autotest_common.sh@10 -- # set +x 00:05:31.195 04:08:43 -- spdk/autotest.sh@136 -- # timing_enter afterboot 00:05:31.195 04:08:43 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:31.195 04:08:43 -- common/autotest_common.sh@10 -- # set +x 00:05:31.195 04:08:43 -- spdk/autotest.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:32.131 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:32.131 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:32.131 0000:00:07.0 (1b36 0010): nvme -> 
uio_pci_generic 00:05:32.131 04:08:44 -- spdk/autotest.sh@138 -- # timing_exit afterboot 00:05:32.131 04:08:44 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:32.131 04:08:44 -- common/autotest_common.sh@10 -- # set +x 00:05:32.131 04:08:44 -- spdk/autotest.sh@142 -- # opal_revert_cleanup 00:05:32.131 04:08:44 -- common/autotest_common.sh@1586 -- # mapfile -t bdfs 00:05:32.131 04:08:44 -- common/autotest_common.sh@1586 -- # get_nvme_bdfs_by_id 0x0a54 00:05:32.131 04:08:44 -- common/autotest_common.sh@1572 -- # bdfs=() 00:05:32.131 04:08:44 -- common/autotest_common.sh@1572 -- # local bdfs 00:05:32.131 04:08:44 -- common/autotest_common.sh@1574 -- # get_nvme_bdfs 00:05:32.131 04:08:44 -- common/autotest_common.sh@1508 -- # bdfs=() 00:05:32.131 04:08:44 -- common/autotest_common.sh@1508 -- # local bdfs 00:05:32.131 04:08:44 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:32.131 04:08:44 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:32.131 04:08:44 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:05:32.131 04:08:44 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:05:32.131 04:08:44 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:05:32.131 04:08:44 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:05:32.131 04:08:44 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:05:32.131 04:08:44 -- common/autotest_common.sh@1575 -- # device=0x0010 00:05:32.131 04:08:44 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:32.131 04:08:44 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:05:32.131 04:08:44 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:07.0/device 00:05:32.131 04:08:44 -- common/autotest_common.sh@1575 -- # device=0x0010 00:05:32.131 04:08:44 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:32.131 04:08:44 -- common/autotest_common.sh@1581 -- # printf '%s\n' 00:05:32.131 04:08:44 -- common/autotest_common.sh@1587 -- # [[ -z '' ]] 00:05:32.131 04:08:44 -- common/autotest_common.sh@1588 -- # return 0 00:05:32.131 04:08:44 -- spdk/autotest.sh@148 -- # '[' 0 -eq 1 ']' 00:05:32.131 04:08:44 -- spdk/autotest.sh@152 -- # '[' 1 -eq 1 ']' 00:05:32.131 04:08:44 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:05:32.131 04:08:44 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:05:32.131 04:08:44 -- spdk/autotest.sh@160 -- # timing_enter lib 00:05:32.131 04:08:44 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:32.131 04:08:44 -- common/autotest_common.sh@10 -- # set +x 00:05:32.132 04:08:44 -- spdk/autotest.sh@162 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:32.132 04:08:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:32.132 04:08:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:32.132 04:08:44 -- common/autotest_common.sh@10 -- # set +x 00:05:32.391 ************************************ 00:05:32.391 START TEST env 00:05:32.391 ************************************ 00:05:32.391 04:08:44 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:32.391 * Looking for test storage... 
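A note on the pre-cleanup trace above: before the env suite starts, nvme_namespace_revert and opal_revert_cleanup probe each controller's Optional Admin Command Support (oacs) and unallocated capacity (unvmcap) with nvme-cli and skip controllers that advertise nothing to revert. A minimal standalone sketch of that probe, assuming nvme-cli is installed and that the controller nodes (/dev/nvme0, /dev/nvme1) match the readlink output seen in the trace; the real helpers in common/autotest_common.sh resolve the nodes from the PCI BDF first.

    #!/usr/bin/env bash
    # Hypothetical standalone probe mirroring the traced oacs/unvmcap checks.
    for ctrlr in /dev/nvme0 /dev/nvme1; do
        [[ -c "$ctrlr" ]] || continue
        # Bit 3 of OACS advertises Namespace Management support (0x12a & 0x8 == 8 in the trace).
        oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)
        ns_manage=$((oacs & 0x8))
        # unvmcap of 0 means there is no unallocated capacity left to revert.
        unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2)
        echo "$ctrlr: oacs=$oacs ns_manage=$ns_manage unvmcap=$unvmcap"
    done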
00:05:32.391 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:32.391 04:08:44 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:32.391 04:08:44 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:32.391 04:08:44 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:32.391 04:08:44 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:32.391 04:08:44 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:32.391 04:08:44 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:32.391 04:08:44 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:32.391 04:08:44 -- scripts/common.sh@335 -- # IFS=.-: 00:05:32.391 04:08:44 -- scripts/common.sh@335 -- # read -ra ver1 00:05:32.391 04:08:44 -- scripts/common.sh@336 -- # IFS=.-: 00:05:32.391 04:08:44 -- scripts/common.sh@336 -- # read -ra ver2 00:05:32.391 04:08:44 -- scripts/common.sh@337 -- # local 'op=<' 00:05:32.391 04:08:44 -- scripts/common.sh@339 -- # ver1_l=2 00:05:32.391 04:08:44 -- scripts/common.sh@340 -- # ver2_l=1 00:05:32.391 04:08:44 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:32.391 04:08:44 -- scripts/common.sh@343 -- # case "$op" in 00:05:32.391 04:08:44 -- scripts/common.sh@344 -- # : 1 00:05:32.391 04:08:44 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:32.391 04:08:44 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:32.391 04:08:44 -- scripts/common.sh@364 -- # decimal 1 00:05:32.391 04:08:44 -- scripts/common.sh@352 -- # local d=1 00:05:32.391 04:08:44 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:32.391 04:08:44 -- scripts/common.sh@354 -- # echo 1 00:05:32.391 04:08:44 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:32.391 04:08:44 -- scripts/common.sh@365 -- # decimal 2 00:05:32.391 04:08:44 -- scripts/common.sh@352 -- # local d=2 00:05:32.391 04:08:44 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:32.391 04:08:44 -- scripts/common.sh@354 -- # echo 2 00:05:32.391 04:08:44 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:32.391 04:08:44 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:32.391 04:08:44 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:32.391 04:08:44 -- scripts/common.sh@367 -- # return 0 00:05:32.391 04:08:44 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:32.391 04:08:44 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:32.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.391 --rc genhtml_branch_coverage=1 00:05:32.391 --rc genhtml_function_coverage=1 00:05:32.391 --rc genhtml_legend=1 00:05:32.391 --rc geninfo_all_blocks=1 00:05:32.391 --rc geninfo_unexecuted_blocks=1 00:05:32.391 00:05:32.391 ' 00:05:32.391 04:08:44 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:32.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.391 --rc genhtml_branch_coverage=1 00:05:32.391 --rc genhtml_function_coverage=1 00:05:32.391 --rc genhtml_legend=1 00:05:32.391 --rc geninfo_all_blocks=1 00:05:32.391 --rc geninfo_unexecuted_blocks=1 00:05:32.391 00:05:32.391 ' 00:05:32.391 04:08:44 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:32.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.391 --rc genhtml_branch_coverage=1 00:05:32.391 --rc genhtml_function_coverage=1 00:05:32.391 --rc genhtml_legend=1 00:05:32.391 --rc geninfo_all_blocks=1 00:05:32.391 --rc geninfo_unexecuted_blocks=1 00:05:32.391 00:05:32.391 ' 00:05:32.391 04:08:44 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:32.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.391 --rc genhtml_branch_coverage=1 00:05:32.391 --rc genhtml_function_coverage=1 00:05:32.391 --rc genhtml_legend=1 00:05:32.391 --rc geninfo_all_blocks=1 00:05:32.391 --rc geninfo_unexecuted_blocks=1 00:05:32.391 00:05:32.391 ' 00:05:32.391 04:08:44 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:32.391 04:08:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:32.391 04:08:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:32.391 04:08:44 -- common/autotest_common.sh@10 -- # set +x 00:05:32.391 ************************************ 00:05:32.391 START TEST env_memory 00:05:32.391 ************************************ 00:05:32.391 04:08:44 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:32.391 00:05:32.391 00:05:32.391 CUnit - A unit testing framework for C - Version 2.1-3 00:05:32.391 http://cunit.sourceforge.net/ 00:05:32.391 00:05:32.391 00:05:32.391 Suite: memory 00:05:32.391 Test: alloc and free memory map ...[2024-12-06 04:08:44.946869] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:32.650 passed 00:05:32.650 Test: mem map translation ...[2024-12-06 04:08:44.985887] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:32.650 [2024-12-06 04:08:44.985947] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:32.650 [2024-12-06 04:08:44.986032] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:32.650 [2024-12-06 04:08:44.986059] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:32.650 passed 00:05:32.650 Test: mem map registration ...[2024-12-06 04:08:45.050384] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:32.650 [2024-12-06 04:08:45.050426] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:32.650 passed 00:05:32.650 Test: mem map adjacent registrations ...passed 00:05:32.650 00:05:32.650 Run Summary: Type Total Ran Passed Failed Inactive 00:05:32.650 suites 1 1 n/a 0 0 00:05:32.650 tests 4 4 4 0 0 00:05:32.650 asserts 152 152 152 0 n/a 00:05:32.650 00:05:32.650 Elapsed time = 0.228 seconds 00:05:32.650 00:05:32.650 real 0m0.249s 00:05:32.650 user 0m0.228s 00:05:32.650 sys 0m0.013s 00:05:32.650 04:08:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:32.650 04:08:45 -- common/autotest_common.sh@10 -- # set +x 00:05:32.650 ************************************ 00:05:32.650 END TEST env_memory 00:05:32.650 ************************************ 00:05:32.650 04:08:45 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:32.650 04:08:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:32.650 04:08:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:32.650 04:08:45 -- 
common/autotest_common.sh@10 -- # set +x 00:05:32.650 ************************************ 00:05:32.650 START TEST env_vtophys 00:05:32.650 ************************************ 00:05:32.650 04:08:45 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:32.909 EAL: lib.eal log level changed from notice to debug 00:05:32.909 EAL: Detected lcore 0 as core 0 on socket 0 00:05:32.909 EAL: Detected lcore 1 as core 0 on socket 0 00:05:32.909 EAL: Detected lcore 2 as core 0 on socket 0 00:05:32.909 EAL: Detected lcore 3 as core 0 on socket 0 00:05:32.909 EAL: Detected lcore 4 as core 0 on socket 0 00:05:32.909 EAL: Detected lcore 5 as core 0 on socket 0 00:05:32.909 EAL: Detected lcore 6 as core 0 on socket 0 00:05:32.909 EAL: Detected lcore 7 as core 0 on socket 0 00:05:32.909 EAL: Detected lcore 8 as core 0 on socket 0 00:05:32.909 EAL: Detected lcore 9 as core 0 on socket 0 00:05:32.909 EAL: Maximum logical cores by configuration: 128 00:05:32.909 EAL: Detected CPU lcores: 10 00:05:32.909 EAL: Detected NUMA nodes: 1 00:05:32.909 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:32.909 EAL: Detected shared linkage of DPDK 00:05:32.909 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:05:32.909 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:05:32.909 EAL: Registered [vdev] bus. 00:05:32.909 EAL: bus.vdev log level changed from disabled to notice 00:05:32.909 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:05:32.909 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:05:32.909 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:32.909 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:32.909 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:05:32.910 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:05:32.910 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:05:32.910 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:05:32.910 EAL: No shared files mode enabled, IPC will be disabled 00:05:32.910 EAL: No shared files mode enabled, IPC is disabled 00:05:32.910 EAL: Selected IOVA mode 'PA' 00:05:32.910 EAL: Probing VFIO support... 00:05:32.910 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:32.910 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:32.910 EAL: Ask a virtual area of 0x2e000 bytes 00:05:32.910 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:32.910 EAL: Setting up physically contiguous memory... 
00:05:32.910 EAL: Setting maximum number of open files to 524288 00:05:32.910 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:32.910 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:32.910 EAL: Ask a virtual area of 0x61000 bytes 00:05:32.910 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:32.910 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:32.910 EAL: Ask a virtual area of 0x400000000 bytes 00:05:32.910 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:32.910 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:32.910 EAL: Ask a virtual area of 0x61000 bytes 00:05:32.910 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:32.910 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:32.910 EAL: Ask a virtual area of 0x400000000 bytes 00:05:32.910 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:32.910 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:32.910 EAL: Ask a virtual area of 0x61000 bytes 00:05:32.910 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:32.910 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:32.910 EAL: Ask a virtual area of 0x400000000 bytes 00:05:32.910 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:32.910 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:32.910 EAL: Ask a virtual area of 0x61000 bytes 00:05:32.910 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:32.910 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:32.910 EAL: Ask a virtual area of 0x400000000 bytes 00:05:32.910 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:32.910 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:32.910 EAL: Hugepages will be freed exactly as allocated. 00:05:32.910 EAL: No shared files mode enabled, IPC is disabled 00:05:32.910 EAL: No shared files mode enabled, IPC is disabled 00:05:32.910 EAL: TSC frequency is ~2200000 KHz 00:05:32.910 EAL: Main lcore 0 is ready (tid=7f686785ba00;cpuset=[0]) 00:05:32.910 EAL: Trying to obtain current memory policy. 00:05:32.910 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:32.910 EAL: Restoring previous memory policy: 0 00:05:32.910 EAL: request: mp_malloc_sync 00:05:32.910 EAL: No shared files mode enabled, IPC is disabled 00:05:32.910 EAL: Heap on socket 0 was expanded by 2MB 00:05:32.910 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:32.910 EAL: No shared files mode enabled, IPC is disabled 00:05:32.910 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:32.910 EAL: Mem event callback 'spdk:(nil)' registered 00:05:32.910 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:32.910 00:05:32.910 00:05:32.910 CUnit - A unit testing framework for C - Version 2.1-3 00:05:32.910 http://cunit.sourceforge.net/ 00:05:32.910 00:05:32.910 00:05:32.910 Suite: components_suite 00:05:32.910 Test: vtophys_malloc_test ...passed 00:05:32.910 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:05:32.910 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:32.910 EAL: Restoring previous memory policy: 4 00:05:32.910 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.910 EAL: request: mp_malloc_sync 00:05:32.910 EAL: No shared files mode enabled, IPC is disabled 00:05:32.910 EAL: Heap on socket 0 was expanded by 4MB 00:05:32.910 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.910 EAL: request: mp_malloc_sync 00:05:32.910 EAL: No shared files mode enabled, IPC is disabled 00:05:32.910 EAL: Heap on socket 0 was shrunk by 4MB 00:05:32.910 EAL: Trying to obtain current memory policy. 00:05:32.910 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:32.910 EAL: Restoring previous memory policy: 4 00:05:32.910 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.910 EAL: request: mp_malloc_sync 00:05:32.910 EAL: No shared files mode enabled, IPC is disabled 00:05:32.910 EAL: Heap on socket 0 was expanded by 6MB 00:05:32.910 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.910 EAL: request: mp_malloc_sync 00:05:32.910 EAL: No shared files mode enabled, IPC is disabled 00:05:32.910 EAL: Heap on socket 0 was shrunk by 6MB 00:05:32.910 EAL: Trying to obtain current memory policy. 00:05:32.910 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:32.910 EAL: Restoring previous memory policy: 4 00:05:32.910 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.910 EAL: request: mp_malloc_sync 00:05:32.910 EAL: No shared files mode enabled, IPC is disabled 00:05:32.910 EAL: Heap on socket 0 was expanded by 10MB 00:05:32.910 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.910 EAL: request: mp_malloc_sync 00:05:32.910 EAL: No shared files mode enabled, IPC is disabled 00:05:32.910 EAL: Heap on socket 0 was shrunk by 10MB 00:05:32.910 EAL: Trying to obtain current memory policy. 00:05:32.910 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:32.910 EAL: Restoring previous memory policy: 4 00:05:32.910 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.910 EAL: request: mp_malloc_sync 00:05:32.910 EAL: No shared files mode enabled, IPC is disabled 00:05:32.910 EAL: Heap on socket 0 was expanded by 18MB 00:05:32.910 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.910 EAL: request: mp_malloc_sync 00:05:32.910 EAL: No shared files mode enabled, IPC is disabled 00:05:32.910 EAL: Heap on socket 0 was shrunk by 18MB 00:05:32.910 EAL: Trying to obtain current memory policy. 00:05:32.910 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:32.910 EAL: Restoring previous memory policy: 4 00:05:32.910 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.910 EAL: request: mp_malloc_sync 00:05:32.910 EAL: No shared files mode enabled, IPC is disabled 00:05:32.910 EAL: Heap on socket 0 was expanded by 34MB 00:05:32.910 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.910 EAL: request: mp_malloc_sync 00:05:32.910 EAL: No shared files mode enabled, IPC is disabled 00:05:32.910 EAL: Heap on socket 0 was shrunk by 34MB 00:05:32.910 EAL: Trying to obtain current memory policy. 
00:05:32.910 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:32.910 EAL: Restoring previous memory policy: 4 00:05:32.910 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.910 EAL: request: mp_malloc_sync 00:05:32.910 EAL: No shared files mode enabled, IPC is disabled 00:05:32.910 EAL: Heap on socket 0 was expanded by 66MB 00:05:32.910 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.910 EAL: request: mp_malloc_sync 00:05:32.910 EAL: No shared files mode enabled, IPC is disabled 00:05:32.910 EAL: Heap on socket 0 was shrunk by 66MB 00:05:32.910 EAL: Trying to obtain current memory policy. 00:05:32.910 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:32.910 EAL: Restoring previous memory policy: 4 00:05:32.910 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.910 EAL: request: mp_malloc_sync 00:05:32.910 EAL: No shared files mode enabled, IPC is disabled 00:05:32.910 EAL: Heap on socket 0 was expanded by 130MB 00:05:32.910 EAL: Calling mem event callback 'spdk:(nil)' 00:05:33.168 EAL: request: mp_malloc_sync 00:05:33.168 EAL: No shared files mode enabled, IPC is disabled 00:05:33.168 EAL: Heap on socket 0 was shrunk by 130MB 00:05:33.168 EAL: Trying to obtain current memory policy. 00:05:33.168 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:33.168 EAL: Restoring previous memory policy: 4 00:05:33.168 EAL: Calling mem event callback 'spdk:(nil)' 00:05:33.168 EAL: request: mp_malloc_sync 00:05:33.168 EAL: No shared files mode enabled, IPC is disabled 00:05:33.168 EAL: Heap on socket 0 was expanded by 258MB 00:05:33.168 EAL: Calling mem event callback 'spdk:(nil)' 00:05:33.168 EAL: request: mp_malloc_sync 00:05:33.168 EAL: No shared files mode enabled, IPC is disabled 00:05:33.168 EAL: Heap on socket 0 was shrunk by 258MB 00:05:33.168 EAL: Trying to obtain current memory policy. 00:05:33.168 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:33.426 EAL: Restoring previous memory policy: 4 00:05:33.426 EAL: Calling mem event callback 'spdk:(nil)' 00:05:33.426 EAL: request: mp_malloc_sync 00:05:33.426 EAL: No shared files mode enabled, IPC is disabled 00:05:33.426 EAL: Heap on socket 0 was expanded by 514MB 00:05:33.426 EAL: Calling mem event callback 'spdk:(nil)' 00:05:33.685 EAL: request: mp_malloc_sync 00:05:33.685 EAL: No shared files mode enabled, IPC is disabled 00:05:33.685 EAL: Heap on socket 0 was shrunk by 514MB 00:05:33.685 EAL: Trying to obtain current memory policy. 
00:05:33.685 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:33.943 EAL: Restoring previous memory policy: 4 00:05:33.943 EAL: Calling mem event callback 'spdk:(nil)' 00:05:33.943 EAL: request: mp_malloc_sync 00:05:33.943 EAL: No shared files mode enabled, IPC is disabled 00:05:33.943 EAL: Heap on socket 0 was expanded by 1026MB 00:05:33.943 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.202 EAL: request: mp_malloc_sync 00:05:34.202 EAL: No shared files mode enabled, IPC is disabled 00:05:34.202 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:34.202 passed 00:05:34.202 00:05:34.202 Run Summary: Type Total Ran Passed Failed Inactive 00:05:34.202 suites 1 1 n/a 0 0 00:05:34.202 tests 2 2 2 0 0 00:05:34.202 asserts 5323 5323 5323 0 n/a 00:05:34.202 00:05:34.202 Elapsed time = 1.330 seconds 00:05:34.202 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.202 EAL: request: mp_malloc_sync 00:05:34.202 EAL: No shared files mode enabled, IPC is disabled 00:05:34.202 EAL: Heap on socket 0 was shrunk by 2MB 00:05:34.202 EAL: No shared files mode enabled, IPC is disabled 00:05:34.202 EAL: No shared files mode enabled, IPC is disabled 00:05:34.202 EAL: No shared files mode enabled, IPC is disabled 00:05:34.202 00:05:34.202 real 0m1.531s 00:05:34.202 user 0m0.845s 00:05:34.202 sys 0m0.556s 00:05:34.202 04:08:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:34.202 ************************************ 00:05:34.202 END TEST env_vtophys 00:05:34.202 ************************************ 00:05:34.202 04:08:46 -- common/autotest_common.sh@10 -- # set +x 00:05:34.460 04:08:46 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:34.460 04:08:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:34.460 04:08:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:34.460 04:08:46 -- common/autotest_common.sh@10 -- # set +x 00:05:34.460 ************************************ 00:05:34.460 START TEST env_pci 00:05:34.460 ************************************ 00:05:34.461 04:08:46 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:34.461 00:05:34.461 00:05:34.461 CUnit - A unit testing framework for C - Version 2.1-3 00:05:34.461 http://cunit.sourceforge.net/ 00:05:34.461 00:05:34.461 00:05:34.461 Suite: pci 00:05:34.461 Test: pci_hook ...[2024-12-06 04:08:46.787677] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 65718 has claimed it 00:05:34.461 passed 00:05:34.461 00:05:34.461 Run Summary: Type Total Ran Passed Failed Inactive 00:05:34.461 suites 1 1 n/a 0 0 00:05:34.461 tests 1 1 1 0 0 00:05:34.461 asserts 25 25 25 0 n/a 00:05:34.461 00:05:34.461 Elapsed time = 0.002 seconds 00:05:34.461 EAL: Cannot find device (10000:00:01.0) 00:05:34.461 EAL: Failed to attach device on primary process 00:05:34.461 00:05:34.461 real 0m0.018s 00:05:34.461 user 0m0.009s 00:05:34.461 sys 0m0.009s 00:05:34.461 ************************************ 00:05:34.461 END TEST env_pci 00:05:34.461 ************************************ 00:05:34.461 04:08:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:34.461 04:08:46 -- common/autotest_common.sh@10 -- # set +x 00:05:34.461 04:08:46 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:34.461 04:08:46 -- env/env.sh@15 -- # uname 00:05:34.461 04:08:46 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:34.461 04:08:46 -- env/env.sh@22 -- # 
argv+=--base-virtaddr=0x200000000000 00:05:34.461 04:08:46 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:34.461 04:08:46 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:05:34.461 04:08:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:34.461 04:08:46 -- common/autotest_common.sh@10 -- # set +x 00:05:34.461 ************************************ 00:05:34.461 START TEST env_dpdk_post_init 00:05:34.461 ************************************ 00:05:34.461 04:08:46 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:34.461 EAL: Detected CPU lcores: 10 00:05:34.461 EAL: Detected NUMA nodes: 1 00:05:34.461 EAL: Detected shared linkage of DPDK 00:05:34.461 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:34.461 EAL: Selected IOVA mode 'PA' 00:05:34.461 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:34.461 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1) 00:05:34.461 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:07.0 (socket -1) 00:05:34.720 Starting DPDK initialization... 00:05:34.720 Starting SPDK post initialization... 00:05:34.720 SPDK NVMe probe 00:05:34.720 Attaching to 0000:00:06.0 00:05:34.720 Attaching to 0000:00:07.0 00:05:34.720 Attached to 0000:00:06.0 00:05:34.720 Attached to 0000:00:07.0 00:05:34.720 Cleaning up... 00:05:34.720 ************************************ 00:05:34.720 END TEST env_dpdk_post_init 00:05:34.720 ************************************ 00:05:34.720 00:05:34.720 real 0m0.177s 00:05:34.720 user 0m0.045s 00:05:34.720 sys 0m0.032s 00:05:34.720 04:08:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:34.720 04:08:47 -- common/autotest_common.sh@10 -- # set +x 00:05:34.720 04:08:47 -- env/env.sh@26 -- # uname 00:05:34.720 04:08:47 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:34.720 04:08:47 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:34.720 04:08:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:34.720 04:08:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:34.720 04:08:47 -- common/autotest_common.sh@10 -- # set +x 00:05:34.720 ************************************ 00:05:34.720 START TEST env_mem_callbacks 00:05:34.720 ************************************ 00:05:34.720 04:08:47 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:34.720 EAL: Detected CPU lcores: 10 00:05:34.720 EAL: Detected NUMA nodes: 1 00:05:34.720 EAL: Detected shared linkage of DPDK 00:05:34.720 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:34.720 EAL: Selected IOVA mode 'PA' 00:05:34.720 00:05:34.720 00:05:34.720 CUnit - A unit testing framework for C - Version 2.1-3 00:05:34.720 http://cunit.sourceforge.net/ 00:05:34.720 00:05:34.720 00:05:34.720 Suite: memory 00:05:34.720 Test: test ... 
00:05:34.720 register 0x200000200000 2097152 00:05:34.720 malloc 3145728 00:05:34.720 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:34.720 register 0x200000400000 4194304 00:05:34.720 buf 0x200000500000 len 3145728 PASSED 00:05:34.720 malloc 64 00:05:34.720 buf 0x2000004fff40 len 64 PASSED 00:05:34.720 malloc 4194304 00:05:34.720 register 0x200000800000 6291456 00:05:34.720 buf 0x200000a00000 len 4194304 PASSED 00:05:34.720 free 0x200000500000 3145728 00:05:34.720 free 0x2000004fff40 64 00:05:34.720 unregister 0x200000400000 4194304 PASSED 00:05:34.720 free 0x200000a00000 4194304 00:05:34.720 unregister 0x200000800000 6291456 PASSED 00:05:34.720 malloc 8388608 00:05:34.720 register 0x200000400000 10485760 00:05:34.720 buf 0x200000600000 len 8388608 PASSED 00:05:34.720 free 0x200000600000 8388608 00:05:34.720 unregister 0x200000400000 10485760 PASSED 00:05:34.720 passed 00:05:34.720 00:05:34.720 Run Summary: Type Total Ran Passed Failed Inactive 00:05:34.720 suites 1 1 n/a 0 0 00:05:34.720 tests 1 1 1 0 0 00:05:34.720 asserts 15 15 15 0 n/a 00:05:34.720 00:05:34.720 Elapsed time = 0.008 seconds 00:05:34.720 00:05:34.720 real 0m0.140s 00:05:34.720 user 0m0.015s 00:05:34.720 sys 0m0.022s 00:05:34.720 04:08:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:34.721 04:08:47 -- common/autotest_common.sh@10 -- # set +x 00:05:34.721 ************************************ 00:05:34.721 END TEST env_mem_callbacks 00:05:34.721 ************************************ 00:05:34.721 00:05:34.721 real 0m2.582s 00:05:34.721 user 0m1.340s 00:05:34.721 sys 0m0.889s 00:05:34.721 04:08:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:34.721 04:08:47 -- common/autotest_common.sh@10 -- # set +x 00:05:34.721 ************************************ 00:05:34.721 END TEST env 00:05:34.721 ************************************ 00:05:34.980 04:08:47 -- spdk/autotest.sh@163 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:34.980 04:08:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:34.980 04:08:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:34.980 04:08:47 -- common/autotest_common.sh@10 -- # set +x 00:05:34.980 ************************************ 00:05:34.980 START TEST rpc 00:05:34.980 ************************************ 00:05:34.980 04:08:47 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:34.980 * Looking for test storage... 
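The env suite that just completed is driven by test/env/env.sh, which simply runs each CUnit binary; the only arguments visible in the trace are the -c 0x1 core mask and the fixed --base-virtaddr passed to env_dpdk_post_init. A hedged sketch of an equivalent manual run, assuming the repository sits at /home/vagrant/spdk_repo/spdk and the test binaries are built; the real env.sh derives these arguments per platform before each run_test call.

    #!/usr/bin/env bash
    set -e
    testdir=/home/vagrant/spdk_repo/spdk/test/env

    "$testdir/memory/memory_ut"                      # mem map alloc/translate/register tests
    "$testdir/vtophys/vtophys"                       # hugepage-backed malloc + vtophys tests
    "$testdir/pci/pci_ut"                            # PCI claim/hook test
    "$testdir/env_dpdk_post_init/env_dpdk_post_init" -c 0x1 --base-virtaddr=0x200000000000
    "$testdir/mem_callbacks/mem_callbacks"           # spdk mem register/unregister callbacks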
00:05:34.980 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:34.980 04:08:47 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:34.980 04:08:47 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:34.980 04:08:47 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:34.980 04:08:47 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:34.980 04:08:47 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:34.980 04:08:47 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:34.980 04:08:47 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:34.980 04:08:47 -- scripts/common.sh@335 -- # IFS=.-: 00:05:34.980 04:08:47 -- scripts/common.sh@335 -- # read -ra ver1 00:05:34.980 04:08:47 -- scripts/common.sh@336 -- # IFS=.-: 00:05:34.980 04:08:47 -- scripts/common.sh@336 -- # read -ra ver2 00:05:34.980 04:08:47 -- scripts/common.sh@337 -- # local 'op=<' 00:05:34.980 04:08:47 -- scripts/common.sh@339 -- # ver1_l=2 00:05:34.980 04:08:47 -- scripts/common.sh@340 -- # ver2_l=1 00:05:34.980 04:08:47 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:34.980 04:08:47 -- scripts/common.sh@343 -- # case "$op" in 00:05:34.980 04:08:47 -- scripts/common.sh@344 -- # : 1 00:05:34.980 04:08:47 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:34.980 04:08:47 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:34.980 04:08:47 -- scripts/common.sh@364 -- # decimal 1 00:05:34.980 04:08:47 -- scripts/common.sh@352 -- # local d=1 00:05:34.980 04:08:47 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:34.980 04:08:47 -- scripts/common.sh@354 -- # echo 1 00:05:34.980 04:08:47 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:34.980 04:08:47 -- scripts/common.sh@365 -- # decimal 2 00:05:34.980 04:08:47 -- scripts/common.sh@352 -- # local d=2 00:05:34.980 04:08:47 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:34.980 04:08:47 -- scripts/common.sh@354 -- # echo 2 00:05:34.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
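The "Waiting for process..." line above comes from waitforlisten: rpc.sh launches spdk_tgt in the background and polls until the JSON-RPC socket answers before any test RPC is sent. A simplified sketch of that startup handshake, assuming scripts/rpc.py from the same repository and the default /var/tmp/spdk.sock socket; the real helper in common/autotest_common.sh additionally checks that the pid is still alive, retries with a bounded count, and installs the killprocess trap seen in the trace.

    #!/usr/bin/env bash
    rootdir=/home/vagrant/spdk_repo/spdk
    rpc_sock=/var/tmp/spdk.sock

    # Start the target with the bdev tracepoint group enabled, as rpc.sh does with "-e bdev".
    "$rootdir/build/bin/spdk_tgt" -e bdev &
    spdk_pid=$!
    trap 'kill $spdk_pid' EXIT

    # Poll the UNIX-domain RPC socket until the target accepts commands.
    until "$rootdir/scripts/rpc.py" -s "$rpc_sock" rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
    echo "spdk_tgt (pid $spdk_pid) is listening on $rpc_sock"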
00:05:34.980 04:08:47 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:34.980 04:08:47 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:34.980 04:08:47 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:34.980 04:08:47 -- scripts/common.sh@367 -- # return 0 00:05:34.980 04:08:47 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:34.980 04:08:47 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:34.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.980 --rc genhtml_branch_coverage=1 00:05:34.980 --rc genhtml_function_coverage=1 00:05:34.980 --rc genhtml_legend=1 00:05:34.980 --rc geninfo_all_blocks=1 00:05:34.980 --rc geninfo_unexecuted_blocks=1 00:05:34.980 00:05:34.980 ' 00:05:34.980 04:08:47 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:34.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.980 --rc genhtml_branch_coverage=1 00:05:34.980 --rc genhtml_function_coverage=1 00:05:34.980 --rc genhtml_legend=1 00:05:34.980 --rc geninfo_all_blocks=1 00:05:34.980 --rc geninfo_unexecuted_blocks=1 00:05:34.980 00:05:34.980 ' 00:05:34.980 04:08:47 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:34.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.980 --rc genhtml_branch_coverage=1 00:05:34.980 --rc genhtml_function_coverage=1 00:05:34.980 --rc genhtml_legend=1 00:05:34.980 --rc geninfo_all_blocks=1 00:05:34.980 --rc geninfo_unexecuted_blocks=1 00:05:34.980 00:05:34.980 ' 00:05:34.980 04:08:47 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:34.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.980 --rc genhtml_branch_coverage=1 00:05:34.980 --rc genhtml_function_coverage=1 00:05:34.980 --rc genhtml_legend=1 00:05:34.980 --rc geninfo_all_blocks=1 00:05:34.980 --rc geninfo_unexecuted_blocks=1 00:05:34.980 00:05:34.980 ' 00:05:34.980 04:08:47 -- rpc/rpc.sh@65 -- # spdk_pid=65835 00:05:34.980 04:08:47 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:34.980 04:08:47 -- rpc/rpc.sh@67 -- # waitforlisten 65835 00:05:34.980 04:08:47 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:34.980 04:08:47 -- common/autotest_common.sh@829 -- # '[' -z 65835 ']' 00:05:34.980 04:08:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:34.980 04:08:47 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:34.980 04:08:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:34.980 04:08:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:34.980 04:08:47 -- common/autotest_common.sh@10 -- # set +x 00:05:35.240 [2024-12-06 04:08:47.582440] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:35.240 [2024-12-06 04:08:47.582849] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65835 ] 00:05:35.240 [2024-12-06 04:08:47.720132] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.498 [2024-12-06 04:08:47.814701] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:35.498 [2024-12-06 04:08:47.815113] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:35.498 [2024-12-06 04:08:47.815220] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 65835' to capture a snapshot of events at runtime. 00:05:35.498 [2024-12-06 04:08:47.815345] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid65835 for offline analysis/debug. 00:05:35.498 [2024-12-06 04:08:47.815591] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.435 04:08:48 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:36.435 04:08:48 -- common/autotest_common.sh@862 -- # return 0 00:05:36.435 04:08:48 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:36.435 04:08:48 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:36.435 04:08:48 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:36.435 04:08:48 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:36.435 04:08:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:36.435 04:08:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:36.435 04:08:48 -- common/autotest_common.sh@10 -- # set +x 00:05:36.435 ************************************ 00:05:36.435 START TEST rpc_integrity 00:05:36.435 ************************************ 00:05:36.435 04:08:48 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:05:36.435 04:08:48 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:36.435 04:08:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:36.435 04:08:48 -- common/autotest_common.sh@10 -- # set +x 00:05:36.435 04:08:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:36.435 04:08:48 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:36.435 04:08:48 -- rpc/rpc.sh@13 -- # jq length 00:05:36.435 04:08:48 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:36.435 04:08:48 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:36.435 04:08:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:36.435 04:08:48 -- common/autotest_common.sh@10 -- # set +x 00:05:36.435 04:08:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:36.435 04:08:48 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:36.435 04:08:48 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:36.435 04:08:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:36.435 04:08:48 -- common/autotest_common.sh@10 -- # set +x 00:05:36.435 04:08:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:36.435 04:08:48 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:36.435 { 00:05:36.435 "name": "Malloc0", 00:05:36.435 "aliases": [ 00:05:36.435 
"0830a7bc-1335-456a-a876-fa16fa01468c" 00:05:36.435 ], 00:05:36.435 "product_name": "Malloc disk", 00:05:36.435 "block_size": 512, 00:05:36.435 "num_blocks": 16384, 00:05:36.435 "uuid": "0830a7bc-1335-456a-a876-fa16fa01468c", 00:05:36.435 "assigned_rate_limits": { 00:05:36.435 "rw_ios_per_sec": 0, 00:05:36.435 "rw_mbytes_per_sec": 0, 00:05:36.435 "r_mbytes_per_sec": 0, 00:05:36.435 "w_mbytes_per_sec": 0 00:05:36.435 }, 00:05:36.435 "claimed": false, 00:05:36.435 "zoned": false, 00:05:36.435 "supported_io_types": { 00:05:36.435 "read": true, 00:05:36.435 "write": true, 00:05:36.435 "unmap": true, 00:05:36.435 "write_zeroes": true, 00:05:36.435 "flush": true, 00:05:36.435 "reset": true, 00:05:36.435 "compare": false, 00:05:36.435 "compare_and_write": false, 00:05:36.435 "abort": true, 00:05:36.435 "nvme_admin": false, 00:05:36.435 "nvme_io": false 00:05:36.435 }, 00:05:36.435 "memory_domains": [ 00:05:36.435 { 00:05:36.435 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:36.435 "dma_device_type": 2 00:05:36.435 } 00:05:36.435 ], 00:05:36.435 "driver_specific": {} 00:05:36.435 } 00:05:36.435 ]' 00:05:36.435 04:08:48 -- rpc/rpc.sh@17 -- # jq length 00:05:36.435 04:08:48 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:36.435 04:08:48 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:36.435 04:08:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:36.435 04:08:48 -- common/autotest_common.sh@10 -- # set +x 00:05:36.435 [2024-12-06 04:08:48.866341] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:36.435 [2024-12-06 04:08:48.866410] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:36.435 [2024-12-06 04:08:48.866430] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b2b030 00:05:36.435 [2024-12-06 04:08:48.866440] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:36.435 [2024-12-06 04:08:48.868142] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:36.435 [2024-12-06 04:08:48.868195] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:36.435 Passthru0 00:05:36.435 04:08:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:36.435 04:08:48 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:36.435 04:08:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:36.435 04:08:48 -- common/autotest_common.sh@10 -- # set +x 00:05:36.435 04:08:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:36.435 04:08:48 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:36.435 { 00:05:36.435 "name": "Malloc0", 00:05:36.435 "aliases": [ 00:05:36.435 "0830a7bc-1335-456a-a876-fa16fa01468c" 00:05:36.435 ], 00:05:36.435 "product_name": "Malloc disk", 00:05:36.435 "block_size": 512, 00:05:36.435 "num_blocks": 16384, 00:05:36.435 "uuid": "0830a7bc-1335-456a-a876-fa16fa01468c", 00:05:36.435 "assigned_rate_limits": { 00:05:36.435 "rw_ios_per_sec": 0, 00:05:36.435 "rw_mbytes_per_sec": 0, 00:05:36.435 "r_mbytes_per_sec": 0, 00:05:36.435 "w_mbytes_per_sec": 0 00:05:36.435 }, 00:05:36.435 "claimed": true, 00:05:36.435 "claim_type": "exclusive_write", 00:05:36.435 "zoned": false, 00:05:36.436 "supported_io_types": { 00:05:36.436 "read": true, 00:05:36.436 "write": true, 00:05:36.436 "unmap": true, 00:05:36.436 "write_zeroes": true, 00:05:36.436 "flush": true, 00:05:36.436 "reset": true, 00:05:36.436 "compare": false, 00:05:36.436 "compare_and_write": false, 00:05:36.436 "abort": true, 00:05:36.436 
"nvme_admin": false, 00:05:36.436 "nvme_io": false 00:05:36.436 }, 00:05:36.436 "memory_domains": [ 00:05:36.436 { 00:05:36.436 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:36.436 "dma_device_type": 2 00:05:36.436 } 00:05:36.436 ], 00:05:36.436 "driver_specific": {} 00:05:36.436 }, 00:05:36.436 { 00:05:36.436 "name": "Passthru0", 00:05:36.436 "aliases": [ 00:05:36.436 "2b7ff128-7d4e-5c17-9f32-d072ee3449bb" 00:05:36.436 ], 00:05:36.436 "product_name": "passthru", 00:05:36.436 "block_size": 512, 00:05:36.436 "num_blocks": 16384, 00:05:36.436 "uuid": "2b7ff128-7d4e-5c17-9f32-d072ee3449bb", 00:05:36.436 "assigned_rate_limits": { 00:05:36.436 "rw_ios_per_sec": 0, 00:05:36.436 "rw_mbytes_per_sec": 0, 00:05:36.436 "r_mbytes_per_sec": 0, 00:05:36.436 "w_mbytes_per_sec": 0 00:05:36.436 }, 00:05:36.436 "claimed": false, 00:05:36.436 "zoned": false, 00:05:36.436 "supported_io_types": { 00:05:36.436 "read": true, 00:05:36.436 "write": true, 00:05:36.436 "unmap": true, 00:05:36.436 "write_zeroes": true, 00:05:36.436 "flush": true, 00:05:36.436 "reset": true, 00:05:36.436 "compare": false, 00:05:36.436 "compare_and_write": false, 00:05:36.436 "abort": true, 00:05:36.436 "nvme_admin": false, 00:05:36.436 "nvme_io": false 00:05:36.436 }, 00:05:36.436 "memory_domains": [ 00:05:36.436 { 00:05:36.436 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:36.436 "dma_device_type": 2 00:05:36.436 } 00:05:36.436 ], 00:05:36.436 "driver_specific": { 00:05:36.436 "passthru": { 00:05:36.436 "name": "Passthru0", 00:05:36.436 "base_bdev_name": "Malloc0" 00:05:36.436 } 00:05:36.436 } 00:05:36.436 } 00:05:36.436 ]' 00:05:36.436 04:08:48 -- rpc/rpc.sh@21 -- # jq length 00:05:36.436 04:08:48 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:36.436 04:08:48 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:36.436 04:08:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:36.436 04:08:48 -- common/autotest_common.sh@10 -- # set +x 00:05:36.436 04:08:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:36.436 04:08:48 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:36.436 04:08:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:36.436 04:08:48 -- common/autotest_common.sh@10 -- # set +x 00:05:36.436 04:08:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:36.436 04:08:48 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:36.436 04:08:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:36.436 04:08:48 -- common/autotest_common.sh@10 -- # set +x 00:05:36.436 04:08:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:36.436 04:08:48 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:36.436 04:08:48 -- rpc/rpc.sh@26 -- # jq length 00:05:36.695 04:08:49 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:36.696 00:05:36.696 real 0m0.329s 00:05:36.696 user 0m0.226s 00:05:36.696 sys 0m0.034s 00:05:36.696 04:08:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:36.696 ************************************ 00:05:36.696 END TEST rpc_integrity 00:05:36.696 ************************************ 00:05:36.696 04:08:49 -- common/autotest_common.sh@10 -- # set +x 00:05:36.696 04:08:49 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:36.696 04:08:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:36.696 04:08:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:36.696 04:08:49 -- common/autotest_common.sh@10 -- # set +x 00:05:36.696 ************************************ 00:05:36.696 START TEST rpc_plugins 00:05:36.696 
************************************ 00:05:36.696 04:08:49 -- common/autotest_common.sh@1114 -- # rpc_plugins 00:05:36.696 04:08:49 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:36.696 04:08:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:36.696 04:08:49 -- common/autotest_common.sh@10 -- # set +x 00:05:36.696 04:08:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:36.696 04:08:49 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:36.696 04:08:49 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:36.696 04:08:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:36.696 04:08:49 -- common/autotest_common.sh@10 -- # set +x 00:05:36.696 04:08:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:36.696 04:08:49 -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:36.696 { 00:05:36.696 "name": "Malloc1", 00:05:36.696 "aliases": [ 00:05:36.696 "38f6c40a-ec41-4902-bc62-eb12033a8896" 00:05:36.696 ], 00:05:36.696 "product_name": "Malloc disk", 00:05:36.696 "block_size": 4096, 00:05:36.696 "num_blocks": 256, 00:05:36.696 "uuid": "38f6c40a-ec41-4902-bc62-eb12033a8896", 00:05:36.696 "assigned_rate_limits": { 00:05:36.696 "rw_ios_per_sec": 0, 00:05:36.696 "rw_mbytes_per_sec": 0, 00:05:36.696 "r_mbytes_per_sec": 0, 00:05:36.696 "w_mbytes_per_sec": 0 00:05:36.696 }, 00:05:36.696 "claimed": false, 00:05:36.696 "zoned": false, 00:05:36.696 "supported_io_types": { 00:05:36.696 "read": true, 00:05:36.696 "write": true, 00:05:36.696 "unmap": true, 00:05:36.696 "write_zeroes": true, 00:05:36.696 "flush": true, 00:05:36.696 "reset": true, 00:05:36.696 "compare": false, 00:05:36.696 "compare_and_write": false, 00:05:36.696 "abort": true, 00:05:36.696 "nvme_admin": false, 00:05:36.696 "nvme_io": false 00:05:36.696 }, 00:05:36.696 "memory_domains": [ 00:05:36.696 { 00:05:36.696 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:36.696 "dma_device_type": 2 00:05:36.696 } 00:05:36.696 ], 00:05:36.696 "driver_specific": {} 00:05:36.696 } 00:05:36.696 ]' 00:05:36.696 04:08:49 -- rpc/rpc.sh@32 -- # jq length 00:05:36.696 04:08:49 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:36.696 04:08:49 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:36.696 04:08:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:36.696 04:08:49 -- common/autotest_common.sh@10 -- # set +x 00:05:36.696 04:08:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:36.696 04:08:49 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:36.696 04:08:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:36.696 04:08:49 -- common/autotest_common.sh@10 -- # set +x 00:05:36.696 04:08:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:36.696 04:08:49 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:36.696 04:08:49 -- rpc/rpc.sh@36 -- # jq length 00:05:36.696 04:08:49 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:36.696 00:05:36.696 real 0m0.160s 00:05:36.696 user 0m0.104s 00:05:36.696 sys 0m0.020s 00:05:36.696 ************************************ 00:05:36.696 END TEST rpc_plugins 00:05:36.696 ************************************ 00:05:36.696 04:08:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:36.696 04:08:49 -- common/autotest_common.sh@10 -- # set +x 00:05:36.955 04:08:49 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:36.955 04:08:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:36.955 04:08:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:36.955 04:08:49 -- common/autotest_common.sh@10 -- # set +x 
00:05:36.955 ************************************ 00:05:36.955 START TEST rpc_trace_cmd_test 00:05:36.955 ************************************ 00:05:36.955 04:08:49 -- common/autotest_common.sh@1114 -- # rpc_trace_cmd_test 00:05:36.955 04:08:49 -- rpc/rpc.sh@40 -- # local info 00:05:36.955 04:08:49 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:36.955 04:08:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:36.955 04:08:49 -- common/autotest_common.sh@10 -- # set +x 00:05:36.955 04:08:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:36.955 04:08:49 -- rpc/rpc.sh@42 -- # info='{ 00:05:36.955 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid65835", 00:05:36.955 "tpoint_group_mask": "0x8", 00:05:36.955 "iscsi_conn": { 00:05:36.955 "mask": "0x2", 00:05:36.955 "tpoint_mask": "0x0" 00:05:36.955 }, 00:05:36.955 "scsi": { 00:05:36.955 "mask": "0x4", 00:05:36.955 "tpoint_mask": "0x0" 00:05:36.955 }, 00:05:36.955 "bdev": { 00:05:36.955 "mask": "0x8", 00:05:36.955 "tpoint_mask": "0xffffffffffffffff" 00:05:36.955 }, 00:05:36.955 "nvmf_rdma": { 00:05:36.955 "mask": "0x10", 00:05:36.955 "tpoint_mask": "0x0" 00:05:36.955 }, 00:05:36.955 "nvmf_tcp": { 00:05:36.955 "mask": "0x20", 00:05:36.955 "tpoint_mask": "0x0" 00:05:36.955 }, 00:05:36.955 "ftl": { 00:05:36.955 "mask": "0x40", 00:05:36.955 "tpoint_mask": "0x0" 00:05:36.955 }, 00:05:36.955 "blobfs": { 00:05:36.955 "mask": "0x80", 00:05:36.955 "tpoint_mask": "0x0" 00:05:36.955 }, 00:05:36.955 "dsa": { 00:05:36.955 "mask": "0x200", 00:05:36.955 "tpoint_mask": "0x0" 00:05:36.955 }, 00:05:36.956 "thread": { 00:05:36.956 "mask": "0x400", 00:05:36.956 "tpoint_mask": "0x0" 00:05:36.956 }, 00:05:36.956 "nvme_pcie": { 00:05:36.956 "mask": "0x800", 00:05:36.956 "tpoint_mask": "0x0" 00:05:36.956 }, 00:05:36.956 "iaa": { 00:05:36.956 "mask": "0x1000", 00:05:36.956 "tpoint_mask": "0x0" 00:05:36.956 }, 00:05:36.956 "nvme_tcp": { 00:05:36.956 "mask": "0x2000", 00:05:36.956 "tpoint_mask": "0x0" 00:05:36.956 }, 00:05:36.956 "bdev_nvme": { 00:05:36.956 "mask": "0x4000", 00:05:36.956 "tpoint_mask": "0x0" 00:05:36.956 } 00:05:36.956 }' 00:05:36.956 04:08:49 -- rpc/rpc.sh@43 -- # jq length 00:05:36.956 04:08:49 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:05:36.956 04:08:49 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:36.956 04:08:49 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:36.956 04:08:49 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:36.956 04:08:49 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:36.956 04:08:49 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:36.956 04:08:49 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:36.956 04:08:49 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:37.215 04:08:49 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:37.215 00:05:37.215 real 0m0.257s 00:05:37.215 user 0m0.214s 00:05:37.215 sys 0m0.034s 00:05:37.215 ************************************ 00:05:37.215 END TEST rpc_trace_cmd_test 00:05:37.215 ************************************ 00:05:37.215 04:08:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:37.215 04:08:49 -- common/autotest_common.sh@10 -- # set +x 00:05:37.215 04:08:49 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:37.215 04:08:49 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:37.215 04:08:49 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:37.215 04:08:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:37.215 04:08:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:37.215 04:08:49 -- 
common/autotest_common.sh@10 -- # set +x 00:05:37.215 ************************************ 00:05:37.215 START TEST rpc_daemon_integrity 00:05:37.215 ************************************ 00:05:37.215 04:08:49 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:05:37.215 04:08:49 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:37.215 04:08:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.215 04:08:49 -- common/autotest_common.sh@10 -- # set +x 00:05:37.215 04:08:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.215 04:08:49 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:37.215 04:08:49 -- rpc/rpc.sh@13 -- # jq length 00:05:37.215 04:08:49 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:37.215 04:08:49 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:37.215 04:08:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.215 04:08:49 -- common/autotest_common.sh@10 -- # set +x 00:05:37.215 04:08:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.215 04:08:49 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:37.215 04:08:49 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:37.215 04:08:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.215 04:08:49 -- common/autotest_common.sh@10 -- # set +x 00:05:37.215 04:08:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.215 04:08:49 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:37.215 { 00:05:37.215 "name": "Malloc2", 00:05:37.215 "aliases": [ 00:05:37.215 "936427f7-9be2-4a0b-b4b4-e455b01ee373" 00:05:37.215 ], 00:05:37.215 "product_name": "Malloc disk", 00:05:37.215 "block_size": 512, 00:05:37.215 "num_blocks": 16384, 00:05:37.215 "uuid": "936427f7-9be2-4a0b-b4b4-e455b01ee373", 00:05:37.215 "assigned_rate_limits": { 00:05:37.215 "rw_ios_per_sec": 0, 00:05:37.215 "rw_mbytes_per_sec": 0, 00:05:37.215 "r_mbytes_per_sec": 0, 00:05:37.215 "w_mbytes_per_sec": 0 00:05:37.215 }, 00:05:37.215 "claimed": false, 00:05:37.215 "zoned": false, 00:05:37.215 "supported_io_types": { 00:05:37.215 "read": true, 00:05:37.215 "write": true, 00:05:37.215 "unmap": true, 00:05:37.215 "write_zeroes": true, 00:05:37.215 "flush": true, 00:05:37.215 "reset": true, 00:05:37.215 "compare": false, 00:05:37.215 "compare_and_write": false, 00:05:37.215 "abort": true, 00:05:37.215 "nvme_admin": false, 00:05:37.215 "nvme_io": false 00:05:37.215 }, 00:05:37.215 "memory_domains": [ 00:05:37.215 { 00:05:37.215 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:37.215 "dma_device_type": 2 00:05:37.215 } 00:05:37.215 ], 00:05:37.215 "driver_specific": {} 00:05:37.215 } 00:05:37.215 ]' 00:05:37.215 04:08:49 -- rpc/rpc.sh@17 -- # jq length 00:05:37.215 04:08:49 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:37.215 04:08:49 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:37.215 04:08:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.215 04:08:49 -- common/autotest_common.sh@10 -- # set +x 00:05:37.215 [2024-12-06 04:08:49.775184] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:37.215 [2024-12-06 04:08:49.775255] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:37.215 [2024-12-06 04:08:49.775274] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b2b9d0 00:05:37.215 [2024-12-06 04:08:49.775284] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:37.215 [2024-12-06 04:08:49.776870] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:37.215 
[2024-12-06 04:08:49.776903] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:37.480 Passthru0 00:05:37.480 04:08:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.480 04:08:49 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:37.480 04:08:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.480 04:08:49 -- common/autotest_common.sh@10 -- # set +x 00:05:37.480 04:08:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.480 04:08:49 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:37.480 { 00:05:37.480 "name": "Malloc2", 00:05:37.480 "aliases": [ 00:05:37.480 "936427f7-9be2-4a0b-b4b4-e455b01ee373" 00:05:37.480 ], 00:05:37.480 "product_name": "Malloc disk", 00:05:37.480 "block_size": 512, 00:05:37.480 "num_blocks": 16384, 00:05:37.480 "uuid": "936427f7-9be2-4a0b-b4b4-e455b01ee373", 00:05:37.480 "assigned_rate_limits": { 00:05:37.480 "rw_ios_per_sec": 0, 00:05:37.480 "rw_mbytes_per_sec": 0, 00:05:37.480 "r_mbytes_per_sec": 0, 00:05:37.480 "w_mbytes_per_sec": 0 00:05:37.480 }, 00:05:37.480 "claimed": true, 00:05:37.480 "claim_type": "exclusive_write", 00:05:37.480 "zoned": false, 00:05:37.480 "supported_io_types": { 00:05:37.480 "read": true, 00:05:37.480 "write": true, 00:05:37.480 "unmap": true, 00:05:37.480 "write_zeroes": true, 00:05:37.480 "flush": true, 00:05:37.480 "reset": true, 00:05:37.480 "compare": false, 00:05:37.480 "compare_and_write": false, 00:05:37.480 "abort": true, 00:05:37.480 "nvme_admin": false, 00:05:37.480 "nvme_io": false 00:05:37.480 }, 00:05:37.480 "memory_domains": [ 00:05:37.480 { 00:05:37.480 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:37.480 "dma_device_type": 2 00:05:37.480 } 00:05:37.480 ], 00:05:37.480 "driver_specific": {} 00:05:37.480 }, 00:05:37.480 { 00:05:37.480 "name": "Passthru0", 00:05:37.480 "aliases": [ 00:05:37.480 "0938f296-90ed-514e-b50a-119d0275474f" 00:05:37.480 ], 00:05:37.480 "product_name": "passthru", 00:05:37.480 "block_size": 512, 00:05:37.480 "num_blocks": 16384, 00:05:37.480 "uuid": "0938f296-90ed-514e-b50a-119d0275474f", 00:05:37.480 "assigned_rate_limits": { 00:05:37.480 "rw_ios_per_sec": 0, 00:05:37.480 "rw_mbytes_per_sec": 0, 00:05:37.480 "r_mbytes_per_sec": 0, 00:05:37.480 "w_mbytes_per_sec": 0 00:05:37.480 }, 00:05:37.480 "claimed": false, 00:05:37.480 "zoned": false, 00:05:37.480 "supported_io_types": { 00:05:37.480 "read": true, 00:05:37.480 "write": true, 00:05:37.480 "unmap": true, 00:05:37.480 "write_zeroes": true, 00:05:37.480 "flush": true, 00:05:37.480 "reset": true, 00:05:37.480 "compare": false, 00:05:37.480 "compare_and_write": false, 00:05:37.480 "abort": true, 00:05:37.480 "nvme_admin": false, 00:05:37.480 "nvme_io": false 00:05:37.480 }, 00:05:37.480 "memory_domains": [ 00:05:37.480 { 00:05:37.480 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:37.480 "dma_device_type": 2 00:05:37.480 } 00:05:37.480 ], 00:05:37.480 "driver_specific": { 00:05:37.480 "passthru": { 00:05:37.481 "name": "Passthru0", 00:05:37.481 "base_bdev_name": "Malloc2" 00:05:37.481 } 00:05:37.481 } 00:05:37.481 } 00:05:37.481 ]' 00:05:37.481 04:08:49 -- rpc/rpc.sh@21 -- # jq length 00:05:37.481 04:08:49 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:37.481 04:08:49 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:37.481 04:08:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.481 04:08:49 -- common/autotest_common.sh@10 -- # set +x 00:05:37.481 04:08:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.481 04:08:49 -- rpc/rpc.sh@24 -- # 
rpc_cmd bdev_malloc_delete Malloc2 00:05:37.481 04:08:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.481 04:08:49 -- common/autotest_common.sh@10 -- # set +x 00:05:37.481 04:08:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.481 04:08:49 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:37.481 04:08:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.481 04:08:49 -- common/autotest_common.sh@10 -- # set +x 00:05:37.481 04:08:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.481 04:08:49 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:37.481 04:08:49 -- rpc/rpc.sh@26 -- # jq length 00:05:37.481 04:08:49 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:37.481 00:05:37.481 real 0m0.329s 00:05:37.481 user 0m0.223s 00:05:37.481 sys 0m0.037s 00:05:37.481 ************************************ 00:05:37.481 END TEST rpc_daemon_integrity 00:05:37.481 ************************************ 00:05:37.481 04:08:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:37.481 04:08:49 -- common/autotest_common.sh@10 -- # set +x 00:05:37.481 04:08:49 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:37.481 04:08:49 -- rpc/rpc.sh@84 -- # killprocess 65835 00:05:37.481 04:08:49 -- common/autotest_common.sh@936 -- # '[' -z 65835 ']' 00:05:37.481 04:08:49 -- common/autotest_common.sh@940 -- # kill -0 65835 00:05:37.481 04:08:49 -- common/autotest_common.sh@941 -- # uname 00:05:37.481 04:08:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:37.481 04:08:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65835 00:05:37.481 04:08:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:37.481 killing process with pid 65835 00:05:37.481 04:08:50 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:37.481 04:08:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65835' 00:05:37.481 04:08:50 -- common/autotest_common.sh@955 -- # kill 65835 00:05:37.481 04:08:50 -- common/autotest_common.sh@960 -- # wait 65835 00:05:38.062 00:05:38.062 real 0m3.095s 00:05:38.062 user 0m4.002s 00:05:38.062 sys 0m0.755s 00:05:38.062 04:08:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:38.062 04:08:50 -- common/autotest_common.sh@10 -- # set +x 00:05:38.062 ************************************ 00:05:38.062 END TEST rpc 00:05:38.062 ************************************ 00:05:38.062 04:08:50 -- spdk/autotest.sh@164 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:38.062 04:08:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:38.062 04:08:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:38.062 04:08:50 -- common/autotest_common.sh@10 -- # set +x 00:05:38.062 ************************************ 00:05:38.062 START TEST rpc_client 00:05:38.062 ************************************ 00:05:38.062 04:08:50 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:38.062 * Looking for test storage... 
00:05:38.062 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:38.062 04:08:50 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:38.062 04:08:50 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:38.062 04:08:50 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:38.321 04:08:50 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:38.321 04:08:50 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:38.321 04:08:50 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:38.321 04:08:50 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:38.321 04:08:50 -- scripts/common.sh@335 -- # IFS=.-: 00:05:38.321 04:08:50 -- scripts/common.sh@335 -- # read -ra ver1 00:05:38.321 04:08:50 -- scripts/common.sh@336 -- # IFS=.-: 00:05:38.321 04:08:50 -- scripts/common.sh@336 -- # read -ra ver2 00:05:38.321 04:08:50 -- scripts/common.sh@337 -- # local 'op=<' 00:05:38.321 04:08:50 -- scripts/common.sh@339 -- # ver1_l=2 00:05:38.321 04:08:50 -- scripts/common.sh@340 -- # ver2_l=1 00:05:38.321 04:08:50 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:38.321 04:08:50 -- scripts/common.sh@343 -- # case "$op" in 00:05:38.321 04:08:50 -- scripts/common.sh@344 -- # : 1 00:05:38.321 04:08:50 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:38.321 04:08:50 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:38.321 04:08:50 -- scripts/common.sh@364 -- # decimal 1 00:05:38.321 04:08:50 -- scripts/common.sh@352 -- # local d=1 00:05:38.321 04:08:50 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:38.321 04:08:50 -- scripts/common.sh@354 -- # echo 1 00:05:38.321 04:08:50 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:38.321 04:08:50 -- scripts/common.sh@365 -- # decimal 2 00:05:38.321 04:08:50 -- scripts/common.sh@352 -- # local d=2 00:05:38.321 04:08:50 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:38.321 04:08:50 -- scripts/common.sh@354 -- # echo 2 00:05:38.321 04:08:50 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:38.321 04:08:50 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:38.321 04:08:50 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:38.321 04:08:50 -- scripts/common.sh@367 -- # return 0 00:05:38.321 04:08:50 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:38.321 04:08:50 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:38.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.321 --rc genhtml_branch_coverage=1 00:05:38.321 --rc genhtml_function_coverage=1 00:05:38.321 --rc genhtml_legend=1 00:05:38.321 --rc geninfo_all_blocks=1 00:05:38.321 --rc geninfo_unexecuted_blocks=1 00:05:38.321 00:05:38.321 ' 00:05:38.321 04:08:50 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:38.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.321 --rc genhtml_branch_coverage=1 00:05:38.321 --rc genhtml_function_coverage=1 00:05:38.321 --rc genhtml_legend=1 00:05:38.321 --rc geninfo_all_blocks=1 00:05:38.321 --rc geninfo_unexecuted_blocks=1 00:05:38.321 00:05:38.321 ' 00:05:38.321 04:08:50 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:38.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.321 --rc genhtml_branch_coverage=1 00:05:38.321 --rc genhtml_function_coverage=1 00:05:38.321 --rc genhtml_legend=1 00:05:38.321 --rc geninfo_all_blocks=1 00:05:38.321 --rc geninfo_unexecuted_blocks=1 00:05:38.321 00:05:38.321 ' 00:05:38.321 
04:08:50 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:38.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.321 --rc genhtml_branch_coverage=1 00:05:38.321 --rc genhtml_function_coverage=1 00:05:38.321 --rc genhtml_legend=1 00:05:38.321 --rc geninfo_all_blocks=1 00:05:38.321 --rc geninfo_unexecuted_blocks=1 00:05:38.321 00:05:38.321 ' 00:05:38.321 04:08:50 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:38.321 OK 00:05:38.321 04:08:50 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:38.321 00:05:38.322 real 0m0.207s 00:05:38.322 user 0m0.134s 00:05:38.322 sys 0m0.086s 00:05:38.322 04:08:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:38.322 ************************************ 00:05:38.322 END TEST rpc_client 00:05:38.322 ************************************ 00:05:38.322 04:08:50 -- common/autotest_common.sh@10 -- # set +x 00:05:38.322 04:08:50 -- spdk/autotest.sh@165 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:38.322 04:08:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:38.322 04:08:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:38.322 04:08:50 -- common/autotest_common.sh@10 -- # set +x 00:05:38.322 ************************************ 00:05:38.322 START TEST json_config 00:05:38.322 ************************************ 00:05:38.322 04:08:50 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:38.322 04:08:50 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:38.322 04:08:50 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:38.322 04:08:50 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:38.581 04:08:50 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:38.581 04:08:50 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:38.581 04:08:50 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:38.581 04:08:50 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:38.581 04:08:50 -- scripts/common.sh@335 -- # IFS=.-: 00:05:38.581 04:08:50 -- scripts/common.sh@335 -- # read -ra ver1 00:05:38.581 04:08:50 -- scripts/common.sh@336 -- # IFS=.-: 00:05:38.581 04:08:50 -- scripts/common.sh@336 -- # read -ra ver2 00:05:38.581 04:08:50 -- scripts/common.sh@337 -- # local 'op=<' 00:05:38.581 04:08:50 -- scripts/common.sh@339 -- # ver1_l=2 00:05:38.581 04:08:50 -- scripts/common.sh@340 -- # ver2_l=1 00:05:38.581 04:08:50 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:38.581 04:08:50 -- scripts/common.sh@343 -- # case "$op" in 00:05:38.581 04:08:50 -- scripts/common.sh@344 -- # : 1 00:05:38.581 04:08:50 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:38.581 04:08:50 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:38.581 04:08:50 -- scripts/common.sh@364 -- # decimal 1 00:05:38.581 04:08:50 -- scripts/common.sh@352 -- # local d=1 00:05:38.581 04:08:50 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:38.581 04:08:50 -- scripts/common.sh@354 -- # echo 1 00:05:38.581 04:08:50 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:38.581 04:08:50 -- scripts/common.sh@365 -- # decimal 2 00:05:38.581 04:08:50 -- scripts/common.sh@352 -- # local d=2 00:05:38.581 04:08:50 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:38.581 04:08:50 -- scripts/common.sh@354 -- # echo 2 00:05:38.581 04:08:50 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:38.581 04:08:50 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:38.581 04:08:50 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:38.581 04:08:50 -- scripts/common.sh@367 -- # return 0 00:05:38.581 04:08:50 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:38.581 04:08:50 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:38.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.581 --rc genhtml_branch_coverage=1 00:05:38.581 --rc genhtml_function_coverage=1 00:05:38.581 --rc genhtml_legend=1 00:05:38.581 --rc geninfo_all_blocks=1 00:05:38.581 --rc geninfo_unexecuted_blocks=1 00:05:38.581 00:05:38.581 ' 00:05:38.581 04:08:50 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:38.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.581 --rc genhtml_branch_coverage=1 00:05:38.581 --rc genhtml_function_coverage=1 00:05:38.581 --rc genhtml_legend=1 00:05:38.581 --rc geninfo_all_blocks=1 00:05:38.582 --rc geninfo_unexecuted_blocks=1 00:05:38.582 00:05:38.582 ' 00:05:38.582 04:08:50 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:38.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.582 --rc genhtml_branch_coverage=1 00:05:38.582 --rc genhtml_function_coverage=1 00:05:38.582 --rc genhtml_legend=1 00:05:38.582 --rc geninfo_all_blocks=1 00:05:38.582 --rc geninfo_unexecuted_blocks=1 00:05:38.582 00:05:38.582 ' 00:05:38.582 04:08:50 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:38.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.582 --rc genhtml_branch_coverage=1 00:05:38.582 --rc genhtml_function_coverage=1 00:05:38.582 --rc genhtml_legend=1 00:05:38.582 --rc geninfo_all_blocks=1 00:05:38.582 --rc geninfo_unexecuted_blocks=1 00:05:38.582 00:05:38.582 ' 00:05:38.582 04:08:50 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:38.582 04:08:50 -- nvmf/common.sh@7 -- # uname -s 00:05:38.582 04:08:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:38.582 04:08:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:38.582 04:08:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:38.582 04:08:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:38.582 04:08:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:38.582 04:08:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:38.582 04:08:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:38.582 04:08:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:38.582 04:08:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:38.582 04:08:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:38.582 04:08:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cb4d3929-adbe-4142-b5d1-990bbf2d4fca 
00:05:38.582 04:08:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=cb4d3929-adbe-4142-b5d1-990bbf2d4fca 00:05:38.582 04:08:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:38.582 04:08:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:38.582 04:08:50 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:38.582 04:08:50 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:38.582 04:08:50 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:38.582 04:08:50 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:38.582 04:08:50 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:38.582 04:08:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:38.582 04:08:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:38.582 04:08:50 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:38.582 04:08:50 -- paths/export.sh@5 -- # export PATH 00:05:38.582 04:08:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:38.582 04:08:50 -- nvmf/common.sh@46 -- # : 0 00:05:38.582 04:08:50 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:38.582 04:08:50 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:38.582 04:08:50 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:38.582 04:08:50 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:38.582 04:08:50 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:38.582 04:08:50 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:38.582 04:08:50 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:38.582 04:08:50 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:38.582 04:08:50 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:05:38.582 04:08:50 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:05:38.582 04:08:50 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:05:38.582 04:08:50 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + 
SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:38.582 04:08:50 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:05:38.582 04:08:50 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:05:38.582 04:08:50 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:38.582 04:08:50 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:05:38.582 04:08:50 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:38.582 04:08:50 -- json_config/json_config.sh@32 -- # declare -A app_params 00:05:38.582 04:08:50 -- json_config/json_config.sh@33 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:05:38.582 04:08:50 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:05:38.582 04:08:50 -- json_config/json_config.sh@43 -- # last_event_id=0 00:05:38.582 04:08:50 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:38.582 INFO: JSON configuration test init 00:05:38.582 04:08:50 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:05:38.582 04:08:50 -- json_config/json_config.sh@420 -- # json_config_test_init 00:05:38.582 04:08:50 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:05:38.582 04:08:50 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:38.582 04:08:50 -- common/autotest_common.sh@10 -- # set +x 00:05:38.582 04:08:50 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:05:38.582 04:08:50 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:38.582 04:08:50 -- common/autotest_common.sh@10 -- # set +x 00:05:38.582 04:08:50 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:05:38.582 04:08:50 -- json_config/json_config.sh@98 -- # local app=target 00:05:38.582 04:08:50 -- json_config/json_config.sh@99 -- # shift 00:05:38.582 04:08:50 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:05:38.582 04:08:50 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:05:38.582 04:08:50 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:05:38.582 04:08:50 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:38.582 04:08:50 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:38.582 04:08:50 -- json_config/json_config.sh@111 -- # app_pid[$app]=66093 00:05:38.582 Waiting for target to run... 00:05:38.582 04:08:50 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:05:38.582 04:08:50 -- json_config/json_config.sh@114 -- # waitforlisten 66093 /var/tmp/spdk_tgt.sock 00:05:38.582 04:08:50 -- common/autotest_common.sh@829 -- # '[' -z 66093 ']' 00:05:38.582 04:08:50 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:38.582 04:08:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:38.582 04:08:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:38.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:38.582 04:08:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
00:05:38.582 04:08:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:38.582 04:08:50 -- common/autotest_common.sh@10 -- # set +x 00:05:38.582 [2024-12-06 04:08:51.000023] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:38.582 [2024-12-06 04:08:51.000135] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66093 ] 00:05:39.151 [2024-12-06 04:08:51.447808] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.151 [2024-12-06 04:08:51.512267] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:39.151 [2024-12-06 04:08:51.512454] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.719 04:08:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:39.719 00:05:39.719 04:08:52 -- common/autotest_common.sh@862 -- # return 0 00:05:39.719 04:08:52 -- json_config/json_config.sh@115 -- # echo '' 00:05:39.719 04:08:52 -- json_config/json_config.sh@322 -- # create_accel_config 00:05:39.719 04:08:52 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:05:39.719 04:08:52 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:39.719 04:08:52 -- common/autotest_common.sh@10 -- # set +x 00:05:39.719 04:08:52 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:05:39.719 04:08:52 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:05:39.719 04:08:52 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:39.719 04:08:52 -- common/autotest_common.sh@10 -- # set +x 00:05:39.719 04:08:52 -- json_config/json_config.sh@326 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:39.719 04:08:52 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:05:39.719 04:08:52 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:40.287 04:08:52 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:05:40.287 04:08:52 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:05:40.287 04:08:52 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:40.287 04:08:52 -- common/autotest_common.sh@10 -- # set +x 00:05:40.287 04:08:52 -- json_config/json_config.sh@48 -- # local ret=0 00:05:40.287 04:08:52 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:40.287 04:08:52 -- json_config/json_config.sh@49 -- # local enabled_types 00:05:40.287 04:08:52 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:40.287 04:08:52 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:40.287 04:08:52 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:40.287 04:08:52 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:40.287 04:08:52 -- json_config/json_config.sh@51 -- # local get_types 00:05:40.287 04:08:52 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:40.287 04:08:52 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:05:40.287 04:08:52 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:40.287 04:08:52 -- 
common/autotest_common.sh@10 -- # set +x 00:05:40.545 04:08:52 -- json_config/json_config.sh@58 -- # return 0 00:05:40.545 04:08:52 -- json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]] 00:05:40.545 04:08:52 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:05:40.545 04:08:52 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:05:40.545 04:08:52 -- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]] 00:05:40.545 04:08:52 -- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config 00:05:40.545 04:08:52 -- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config 00:05:40.545 04:08:52 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:40.545 04:08:52 -- common/autotest_common.sh@10 -- # set +x 00:05:40.545 04:08:52 -- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:40.545 04:08:52 -- json_config/json_config.sh@286 -- # [[ tcp == \r\d\m\a ]] 00:05:40.545 04:08:52 -- json_config/json_config.sh@290 -- # [[ -z 127.0.0.1 ]] 00:05:40.545 04:08:52 -- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:40.545 04:08:52 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:40.804 MallocForNvmf0 00:05:40.804 04:08:53 -- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:40.805 04:08:53 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:41.064 MallocForNvmf1 00:05:41.064 04:08:53 -- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:41.064 04:08:53 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:41.323 [2024-12-06 04:08:53.669114] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:41.323 04:08:53 -- json_config/json_config.sh@299 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:41.323 04:08:53 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:41.582 04:08:53 -- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:41.582 04:08:53 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:41.841 04:08:54 -- json_config/json_config.sh@301 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:41.841 04:08:54 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:41.841 04:08:54 -- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:41.841 04:08:54 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:42.104 [2024-12-06 04:08:54.617716] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:42.104 
04:08:54 -- json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config 00:05:42.104 04:08:54 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:42.104 04:08:54 -- common/autotest_common.sh@10 -- # set +x 00:05:42.365 04:08:54 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:05:42.365 04:08:54 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:42.365 04:08:54 -- common/autotest_common.sh@10 -- # set +x 00:05:42.365 04:08:54 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:05:42.365 04:08:54 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:42.365 04:08:54 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:42.623 MallocBdevForConfigChangeCheck 00:05:42.623 04:08:54 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:05:42.623 04:08:54 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:42.623 04:08:54 -- common/autotest_common.sh@10 -- # set +x 00:05:42.623 04:08:55 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:05:42.623 04:08:55 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:42.882 INFO: shutting down applications... 00:05:42.882 04:08:55 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 00:05:42.882 04:08:55 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:05:42.882 04:08:55 -- json_config/json_config.sh@431 -- # json_config_clear target 00:05:42.882 04:08:55 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:05:42.882 04:08:55 -- json_config/json_config.sh@386 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:43.449 Calling clear_iscsi_subsystem 00:05:43.449 Calling clear_nvmf_subsystem 00:05:43.449 Calling clear_nbd_subsystem 00:05:43.449 Calling clear_ublk_subsystem 00:05:43.449 Calling clear_vhost_blk_subsystem 00:05:43.449 Calling clear_vhost_scsi_subsystem 00:05:43.449 Calling clear_scheduler_subsystem 00:05:43.449 Calling clear_bdev_subsystem 00:05:43.449 Calling clear_accel_subsystem 00:05:43.449 Calling clear_vmd_subsystem 00:05:43.449 Calling clear_sock_subsystem 00:05:43.449 Calling clear_iobuf_subsystem 00:05:43.449 04:08:55 -- json_config/json_config.sh@390 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:05:43.449 04:08:55 -- json_config/json_config.sh@396 -- # count=100 00:05:43.449 04:08:55 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:05:43.449 04:08:55 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:43.449 04:08:55 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:43.449 04:08:55 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:05:43.708 04:08:56 -- json_config/json_config.sh@398 -- # break 00:05:43.708 04:08:56 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:05:43.708 04:08:56 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target 00:05:43.708 04:08:56 -- json_config/json_config.sh@120 -- # local app=target 00:05:43.708 04:08:56 -- 
json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:05:43.708 04:08:56 -- json_config/json_config.sh@124 -- # [[ -n 66093 ]] 00:05:43.708 04:08:56 -- json_config/json_config.sh@127 -- # kill -SIGINT 66093 00:05:43.708 04:08:56 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:05:43.708 04:08:56 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:05:43.708 04:08:56 -- json_config/json_config.sh@130 -- # kill -0 66093 00:05:43.708 04:08:56 -- json_config/json_config.sh@134 -- # sleep 0.5 00:05:44.284 04:08:56 -- json_config/json_config.sh@129 -- # (( i++ )) 00:05:44.284 04:08:56 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:05:44.284 04:08:56 -- json_config/json_config.sh@130 -- # kill -0 66093 00:05:44.284 04:08:56 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:05:44.284 04:08:56 -- json_config/json_config.sh@132 -- # break 00:05:44.285 04:08:56 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:05:44.285 04:08:56 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:05:44.285 SPDK target shutdown done 00:05:44.285 04:08:56 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 00:05:44.285 INFO: relaunching applications... 00:05:44.285 04:08:56 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:44.285 04:08:56 -- json_config/json_config.sh@98 -- # local app=target 00:05:44.285 Waiting for target to run... 00:05:44.285 04:08:56 -- json_config/json_config.sh@99 -- # shift 00:05:44.285 04:08:56 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:05:44.285 04:08:56 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:05:44.285 04:08:56 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:05:44.285 04:08:56 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:44.285 04:08:56 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:44.285 04:08:56 -- json_config/json_config.sh@111 -- # app_pid[$app]=66284 00:05:44.285 04:08:56 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:44.285 04:08:56 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:05:44.285 04:08:56 -- json_config/json_config.sh@114 -- # waitforlisten 66284 /var/tmp/spdk_tgt.sock 00:05:44.285 04:08:56 -- common/autotest_common.sh@829 -- # '[' -z 66284 ']' 00:05:44.285 04:08:56 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:44.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:44.285 04:08:56 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:44.285 04:08:56 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:44.285 04:08:56 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:44.285 04:08:56 -- common/autotest_common.sh@10 -- # set +x 00:05:44.285 [2024-12-06 04:08:56.744739] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:44.285 [2024-12-06 04:08:56.745091] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66284 ] 00:05:44.854 [2024-12-06 04:08:57.164474] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.854 [2024-12-06 04:08:57.228862] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:44.854 [2024-12-06 04:08:57.229074] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.113 [2024-12-06 04:08:57.537654] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:45.113 [2024-12-06 04:08:57.569714] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:45.372 00:05:45.372 INFO: Checking if target configuration is the same... 00:05:45.372 04:08:57 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:45.372 04:08:57 -- common/autotest_common.sh@862 -- # return 0 00:05:45.372 04:08:57 -- json_config/json_config.sh@115 -- # echo '' 00:05:45.372 04:08:57 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:05:45.372 04:08:57 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:45.372 04:08:57 -- json_config/json_config.sh@441 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:45.372 04:08:57 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:05:45.372 04:08:57 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:45.372 + '[' 2 -ne 2 ']' 00:05:45.373 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:45.373 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:45.373 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:45.373 +++ basename /dev/fd/62 00:05:45.373 ++ mktemp /tmp/62.XXX 00:05:45.373 + tmp_file_1=/tmp/62.6gS 00:05:45.373 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:45.373 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:45.373 + tmp_file_2=/tmp/spdk_tgt_config.json.f7M 00:05:45.373 + ret=0 00:05:45.373 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:45.632 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:45.632 + diff -u /tmp/62.6gS /tmp/spdk_tgt_config.json.f7M 00:05:45.632 INFO: JSON config files are the same 00:05:45.632 + echo 'INFO: JSON config files are the same' 00:05:45.632 + rm /tmp/62.6gS /tmp/spdk_tgt_config.json.f7M 00:05:45.891 + exit 0 00:05:45.891 INFO: changing configuration and checking if this can be detected... 00:05:45.891 04:08:58 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:05:45.891 04:08:58 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 
00:05:45.891 04:08:58 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:45.891 04:08:58 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:46.150 04:08:58 -- json_config/json_config.sh@450 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:46.150 04:08:58 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:05:46.150 04:08:58 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:46.150 + '[' 2 -ne 2 ']' 00:05:46.150 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:46.150 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:46.150 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:46.150 +++ basename /dev/fd/62 00:05:46.150 ++ mktemp /tmp/62.XXX 00:05:46.150 + tmp_file_1=/tmp/62.8Xw 00:05:46.150 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:46.150 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:46.150 + tmp_file_2=/tmp/spdk_tgt_config.json.0Oq 00:05:46.150 + ret=0 00:05:46.150 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:46.408 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:46.408 + diff -u /tmp/62.8Xw /tmp/spdk_tgt_config.json.0Oq 00:05:46.408 + ret=1 00:05:46.408 + echo '=== Start of file: /tmp/62.8Xw ===' 00:05:46.408 + cat /tmp/62.8Xw 00:05:46.408 + echo '=== End of file: /tmp/62.8Xw ===' 00:05:46.408 + echo '' 00:05:46.408 + echo '=== Start of file: /tmp/spdk_tgt_config.json.0Oq ===' 00:05:46.408 + cat /tmp/spdk_tgt_config.json.0Oq 00:05:46.408 + echo '=== End of file: /tmp/spdk_tgt_config.json.0Oq ===' 00:05:46.408 + echo '' 00:05:46.408 + rm /tmp/62.8Xw /tmp/spdk_tgt_config.json.0Oq 00:05:46.408 + exit 1 00:05:46.408 INFO: configuration change detected. 00:05:46.409 04:08:58 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 
00:05:46.409 04:08:58 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:05:46.409 04:08:58 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:05:46.409 04:08:58 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:46.409 04:08:58 -- common/autotest_common.sh@10 -- # set +x 00:05:46.409 04:08:58 -- json_config/json_config.sh@360 -- # local ret=0 00:05:46.409 04:08:58 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:05:46.409 04:08:58 -- json_config/json_config.sh@370 -- # [[ -n 66284 ]] 00:05:46.409 04:08:58 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:05:46.409 04:08:58 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:05:46.409 04:08:58 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:46.409 04:08:58 -- common/autotest_common.sh@10 -- # set +x 00:05:46.409 04:08:58 -- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]] 00:05:46.409 04:08:58 -- json_config/json_config.sh@246 -- # uname -s 00:05:46.409 04:08:58 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:05:46.409 04:08:58 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:05:46.409 04:08:58 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:05:46.409 04:08:58 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:05:46.409 04:08:58 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:46.409 04:08:58 -- common/autotest_common.sh@10 -- # set +x 00:05:46.667 04:08:58 -- json_config/json_config.sh@376 -- # killprocess 66284 00:05:46.667 04:08:58 -- common/autotest_common.sh@936 -- # '[' -z 66284 ']' 00:05:46.667 04:08:58 -- common/autotest_common.sh@940 -- # kill -0 66284 00:05:46.667 04:08:58 -- common/autotest_common.sh@941 -- # uname 00:05:46.667 04:08:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:46.667 04:08:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66284 00:05:46.667 killing process with pid 66284 00:05:46.667 04:08:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:46.667 04:08:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:46.667 04:08:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66284' 00:05:46.667 04:08:59 -- common/autotest_common.sh@955 -- # kill 66284 00:05:46.667 04:08:59 -- common/autotest_common.sh@960 -- # wait 66284 00:05:46.927 04:08:59 -- json_config/json_config.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:46.927 04:08:59 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:05:46.927 04:08:59 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:46.927 04:08:59 -- common/autotest_common.sh@10 -- # set +x 00:05:46.927 INFO: Success 00:05:46.927 04:08:59 -- json_config/json_config.sh@381 -- # return 0 00:05:46.927 04:08:59 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:05:46.927 00:05:46.927 real 0m8.558s 00:05:46.927 user 0m12.235s 00:05:46.927 sys 0m1.829s 00:05:46.927 ************************************ 00:05:46.927 END TEST json_config 00:05:46.927 ************************************ 00:05:46.927 04:08:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:46.927 04:08:59 -- common/autotest_common.sh@10 -- # set +x 00:05:46.927 04:08:59 -- spdk/autotest.sh@166 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:46.927 
04:08:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:46.927 04:08:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:46.927 04:08:59 -- common/autotest_common.sh@10 -- # set +x 00:05:46.927 ************************************ 00:05:46.927 START TEST json_config_extra_key 00:05:46.927 ************************************ 00:05:46.927 04:08:59 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:46.927 04:08:59 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:46.927 04:08:59 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:46.927 04:08:59 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:46.927 04:08:59 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:46.927 04:08:59 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:46.927 04:08:59 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:46.927 04:08:59 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:46.927 04:08:59 -- scripts/common.sh@335 -- # IFS=.-: 00:05:46.927 04:08:59 -- scripts/common.sh@335 -- # read -ra ver1 00:05:46.927 04:08:59 -- scripts/common.sh@336 -- # IFS=.-: 00:05:46.927 04:08:59 -- scripts/common.sh@336 -- # read -ra ver2 00:05:46.927 04:08:59 -- scripts/common.sh@337 -- # local 'op=<' 00:05:46.927 04:08:59 -- scripts/common.sh@339 -- # ver1_l=2 00:05:46.927 04:08:59 -- scripts/common.sh@340 -- # ver2_l=1 00:05:46.927 04:08:59 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:46.927 04:08:59 -- scripts/common.sh@343 -- # case "$op" in 00:05:46.927 04:08:59 -- scripts/common.sh@344 -- # : 1 00:05:46.927 04:08:59 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:46.927 04:08:59 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:46.927 04:08:59 -- scripts/common.sh@364 -- # decimal 1 00:05:46.927 04:08:59 -- scripts/common.sh@352 -- # local d=1 00:05:46.927 04:08:59 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:46.927 04:08:59 -- scripts/common.sh@354 -- # echo 1 00:05:46.927 04:08:59 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:47.187 04:08:59 -- scripts/common.sh@365 -- # decimal 2 00:05:47.187 04:08:59 -- scripts/common.sh@352 -- # local d=2 00:05:47.187 04:08:59 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:47.187 04:08:59 -- scripts/common.sh@354 -- # echo 2 00:05:47.187 04:08:59 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:47.187 04:08:59 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:47.187 04:08:59 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:47.187 04:08:59 -- scripts/common.sh@367 -- # return 0 00:05:47.187 04:08:59 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:47.187 04:08:59 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:47.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.187 --rc genhtml_branch_coverage=1 00:05:47.187 --rc genhtml_function_coverage=1 00:05:47.187 --rc genhtml_legend=1 00:05:47.187 --rc geninfo_all_blocks=1 00:05:47.187 --rc geninfo_unexecuted_blocks=1 00:05:47.187 00:05:47.187 ' 00:05:47.187 04:08:59 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:47.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.187 --rc genhtml_branch_coverage=1 00:05:47.187 --rc genhtml_function_coverage=1 00:05:47.187 --rc genhtml_legend=1 00:05:47.187 --rc geninfo_all_blocks=1 00:05:47.187 --rc geninfo_unexecuted_blocks=1 00:05:47.187 00:05:47.187 ' 
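[Note: the scripts/common.sh trace above ("lt 1.15 2" via cmp_versions) is a component-wise version comparison used to pick lcov options. The sketch below reimplements the same logic; the function name version_lt is invented for this sketch.]

    # Split each version on '.', '-' or ':' and compare the fields numerically,
    # treating a missing field as 0, the same way the traced loop does.
    version_lt() {
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1   # equal, so not less-than
    }
    version_lt 1.15 2 && echo 'lcov 1.15 is older than 2.x'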
00:05:47.187 04:08:59 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:47.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.187 --rc genhtml_branch_coverage=1 00:05:47.187 --rc genhtml_function_coverage=1 00:05:47.187 --rc genhtml_legend=1 00:05:47.187 --rc geninfo_all_blocks=1 00:05:47.187 --rc geninfo_unexecuted_blocks=1 00:05:47.187 00:05:47.187 ' 00:05:47.188 04:08:59 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:47.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.188 --rc genhtml_branch_coverage=1 00:05:47.188 --rc genhtml_function_coverage=1 00:05:47.188 --rc genhtml_legend=1 00:05:47.188 --rc geninfo_all_blocks=1 00:05:47.188 --rc geninfo_unexecuted_blocks=1 00:05:47.188 00:05:47.188 ' 00:05:47.188 04:08:59 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:47.188 04:08:59 -- nvmf/common.sh@7 -- # uname -s 00:05:47.188 04:08:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:47.188 04:08:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:47.188 04:08:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:47.188 04:08:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:47.188 04:08:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:47.188 04:08:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:47.188 04:08:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:47.188 04:08:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:47.188 04:08:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:47.188 04:08:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:47.188 04:08:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cb4d3929-adbe-4142-b5d1-990bbf2d4fca 00:05:47.188 04:08:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=cb4d3929-adbe-4142-b5d1-990bbf2d4fca 00:05:47.188 04:08:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:47.188 04:08:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:47.188 04:08:59 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:47.188 04:08:59 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:47.188 04:08:59 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:47.188 04:08:59 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:47.188 04:08:59 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:47.188 04:08:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.188 04:08:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.188 04:08:59 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.188 04:08:59 -- paths/export.sh@5 -- # export PATH 00:05:47.188 04:08:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.188 04:08:59 -- nvmf/common.sh@46 -- # : 0 00:05:47.188 04:08:59 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:47.188 04:08:59 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:47.188 04:08:59 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:47.188 04:08:59 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:47.188 04:08:59 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:47.188 04:08:59 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:47.188 04:08:59 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:47.188 04:08:59 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:47.188 04:08:59 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:05:47.188 04:08:59 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:05:47.188 04:08:59 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:47.188 04:08:59 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:05:47.188 04:08:59 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:47.188 04:08:59 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:05:47.188 04:08:59 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:47.188 04:08:59 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:05:47.188 04:08:59 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:47.188 INFO: launching applications... 00:05:47.188 Waiting for target to run... 00:05:47.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:47.188 04:08:59 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:05:47.188 04:08:59 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:47.188 04:08:59 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:05:47.188 04:08:59 -- json_config/json_config_extra_key.sh@25 -- # shift 00:05:47.188 04:08:59 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:05:47.188 04:08:59 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:05:47.188 04:08:59 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=66437 00:05:47.188 04:08:59 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 
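[Note: the extra_key test above keeps its per-app state in associative arrays keyed by app name and launches the target from them. A sketch of that bookkeeping follows; the array values are the ones declared in the trace, while backgrounding with & and capturing $! is an assumption about how the pid ends up in app_pid.]

    rootdir=/home/vagrant/spdk_repo/spdk
    declare -A app_pid=([target]='')
    declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
    declare -A app_params=([target]='-m 0x1 -s 1024')
    declare -A configs_path=([target]="$rootdir/test/json_config/extra_key.json")

    app=target
    # app_params is intentionally unquoted so '-m 0x1 -s 1024' splits into separate flags.
    "$rootdir/build/bin/spdk_tgt" ${app_params[$app]} \
        -r "${app_socket[$app]}" --json "${configs_path[$app]}" &
    app_pid[$app]=$!
    echo 'Waiting for target to run...'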
00:05:47.188 04:08:59 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 66437 /var/tmp/spdk_tgt.sock 00:05:47.188 04:08:59 -- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:47.188 04:08:59 -- common/autotest_common.sh@829 -- # '[' -z 66437 ']' 00:05:47.188 04:08:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:47.188 04:08:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:47.188 04:08:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:47.188 04:08:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:47.188 04:08:59 -- common/autotest_common.sh@10 -- # set +x 00:05:47.188 [2024-12-06 04:08:59.618564] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:47.188 [2024-12-06 04:08:59.619153] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66437 ] 00:05:47.756 [2024-12-06 04:09:00.071958] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.756 [2024-12-06 04:09:00.133169] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:47.756 [2024-12-06 04:09:00.133604] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.325 04:09:00 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:48.325 04:09:00 -- common/autotest_common.sh@862 -- # return 0 00:05:48.325 04:09:00 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:05:48.325 00:05:48.325 INFO: shutting down applications... 00:05:48.325 04:09:00 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 
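[Note: waitforlisten, traced above, blocks until the just-launched target either answers on its RPC socket or dies. This is a hedged approximation: the polling command (rpc.py rpc_get_methods with a 1 s timeout) and the 0.5 s sleep are assumptions; the real loop lives in autotest_common.sh.]

    waitforlisten_sketch() {
        local pid=$1
        local rpc_addr=${2:-/var/tmp/spdk_tgt.sock}
        local max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for (( i = 0; i < max_retries; i++ )); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died during startup
            if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 -s "$rpc_addr" \
                    rpc_get_methods &>/dev/null; then
                return 0                             # RPC server is listening
            fi
            sleep 0.5
        done
        return 1
    }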
00:05:48.325 04:09:00 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:05:48.325 04:09:00 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:05:48.325 04:09:00 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:05:48.325 04:09:00 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 66437 ]] 00:05:48.325 04:09:00 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 66437 00:05:48.325 04:09:00 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:05:48.325 04:09:00 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:48.325 04:09:00 -- json_config/json_config_extra_key.sh@50 -- # kill -0 66437 00:05:48.325 04:09:00 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:05:48.584 04:09:01 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:05:48.584 04:09:01 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:48.584 04:09:01 -- json_config/json_config_extra_key.sh@50 -- # kill -0 66437 00:05:48.584 04:09:01 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:05:48.584 04:09:01 -- json_config/json_config_extra_key.sh@52 -- # break 00:05:48.584 04:09:01 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:05:48.584 SPDK target shutdown done 00:05:48.584 04:09:01 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:05:48.584 Success 00:05:48.584 04:09:01 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:05:48.584 00:05:48.584 real 0m1.788s 00:05:48.584 user 0m1.663s 00:05:48.584 sys 0m0.495s 00:05:48.584 04:09:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:48.584 ************************************ 00:05:48.584 END TEST json_config_extra_key 00:05:48.584 ************************************ 00:05:48.584 04:09:01 -- common/autotest_common.sh@10 -- # set +x 00:05:48.843 04:09:01 -- spdk/autotest.sh@167 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:48.843 04:09:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:48.843 04:09:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:48.843 04:09:01 -- common/autotest_common.sh@10 -- # set +x 00:05:48.843 ************************************ 00:05:48.843 START TEST alias_rpc 00:05:48.843 ************************************ 00:05:48.843 04:09:01 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:48.843 * Looking for test storage... 
00:05:48.843 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:48.843 04:09:01 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:48.843 04:09:01 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:48.843 04:09:01 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:48.843 04:09:01 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:48.843 04:09:01 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:48.843 04:09:01 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:48.843 04:09:01 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:48.843 04:09:01 -- scripts/common.sh@335 -- # IFS=.-: 00:05:48.843 04:09:01 -- scripts/common.sh@335 -- # read -ra ver1 00:05:48.843 04:09:01 -- scripts/common.sh@336 -- # IFS=.-: 00:05:48.843 04:09:01 -- scripts/common.sh@336 -- # read -ra ver2 00:05:48.843 04:09:01 -- scripts/common.sh@337 -- # local 'op=<' 00:05:48.843 04:09:01 -- scripts/common.sh@339 -- # ver1_l=2 00:05:48.843 04:09:01 -- scripts/common.sh@340 -- # ver2_l=1 00:05:48.843 04:09:01 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:48.843 04:09:01 -- scripts/common.sh@343 -- # case "$op" in 00:05:48.843 04:09:01 -- scripts/common.sh@344 -- # : 1 00:05:48.843 04:09:01 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:48.843 04:09:01 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:48.843 04:09:01 -- scripts/common.sh@364 -- # decimal 1 00:05:48.843 04:09:01 -- scripts/common.sh@352 -- # local d=1 00:05:48.843 04:09:01 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:48.843 04:09:01 -- scripts/common.sh@354 -- # echo 1 00:05:48.843 04:09:01 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:48.843 04:09:01 -- scripts/common.sh@365 -- # decimal 2 00:05:48.843 04:09:01 -- scripts/common.sh@352 -- # local d=2 00:05:48.843 04:09:01 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:48.843 04:09:01 -- scripts/common.sh@354 -- # echo 2 00:05:48.843 04:09:01 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:48.843 04:09:01 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:48.843 04:09:01 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:48.843 04:09:01 -- scripts/common.sh@367 -- # return 0 00:05:48.843 04:09:01 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:48.843 04:09:01 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:48.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.843 --rc genhtml_branch_coverage=1 00:05:48.843 --rc genhtml_function_coverage=1 00:05:48.843 --rc genhtml_legend=1 00:05:48.843 --rc geninfo_all_blocks=1 00:05:48.843 --rc geninfo_unexecuted_blocks=1 00:05:48.843 00:05:48.843 ' 00:05:48.843 04:09:01 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:48.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.843 --rc genhtml_branch_coverage=1 00:05:48.843 --rc genhtml_function_coverage=1 00:05:48.843 --rc genhtml_legend=1 00:05:48.843 --rc geninfo_all_blocks=1 00:05:48.843 --rc geninfo_unexecuted_blocks=1 00:05:48.843 00:05:48.843 ' 00:05:48.843 04:09:01 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:48.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.843 --rc genhtml_branch_coverage=1 00:05:48.843 --rc genhtml_function_coverage=1 00:05:48.843 --rc genhtml_legend=1 00:05:48.843 --rc geninfo_all_blocks=1 00:05:48.843 --rc geninfo_unexecuted_blocks=1 00:05:48.843 00:05:48.843 ' 
00:05:48.843 04:09:01 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:48.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.843 --rc genhtml_branch_coverage=1 00:05:48.843 --rc genhtml_function_coverage=1 00:05:48.843 --rc genhtml_legend=1 00:05:48.843 --rc geninfo_all_blocks=1 00:05:48.843 --rc geninfo_unexecuted_blocks=1 00:05:48.843 00:05:48.843 ' 00:05:48.843 04:09:01 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:48.843 04:09:01 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=66514 00:05:48.843 04:09:01 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 66514 00:05:48.843 04:09:01 -- common/autotest_common.sh@829 -- # '[' -z 66514 ']' 00:05:48.843 04:09:01 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:48.843 04:09:01 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.843 04:09:01 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:48.843 04:09:01 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:48.843 04:09:01 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:48.843 04:09:01 -- common/autotest_common.sh@10 -- # set +x 00:05:49.102 [2024-12-06 04:09:01.444792] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:49.102 [2024-12-06 04:09:01.445321] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66514 ] 00:05:49.102 [2024-12-06 04:09:01.580468] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.363 [2024-12-06 04:09:01.671126] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:49.363 [2024-12-06 04:09:01.671708] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.929 04:09:02 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:49.929 04:09:02 -- common/autotest_common.sh@862 -- # return 0 00:05:49.929 04:09:02 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:50.187 04:09:02 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 66514 00:05:50.187 04:09:02 -- common/autotest_common.sh@936 -- # '[' -z 66514 ']' 00:05:50.187 04:09:02 -- common/autotest_common.sh@940 -- # kill -0 66514 00:05:50.187 04:09:02 -- common/autotest_common.sh@941 -- # uname 00:05:50.187 04:09:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:50.187 04:09:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66514 00:05:50.445 killing process with pid 66514 00:05:50.445 04:09:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:50.445 04:09:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:50.445 04:09:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66514' 00:05:50.445 04:09:02 -- common/autotest_common.sh@955 -- # kill 66514 00:05:50.445 04:09:02 -- common/autotest_common.sh@960 -- # wait 66514 00:05:50.705 ************************************ 00:05:50.705 END TEST alias_rpc 00:05:50.705 ************************************ 00:05:50.705 00:05:50.705 real 0m1.955s 00:05:50.705 user 0m2.188s 00:05:50.705 sys 0m0.481s 
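[Note: alias_rpc.sh above wraps its body in an ERR trap so a failed step cannot leave the target running. A sketch of that skeleton; some_config.json is a placeholder input, and waitforlisten/killprocess stand for the helpers traced earlier.]

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
    spdk_tgt_pid=$!
    trap 'killprocess $spdk_tgt_pid; exit 1' ERR   # cleanup on any failed command
    waitforlisten "$spdk_tgt_pid"
    # The -i flag is copied verbatim from the trace; the config file is a placeholder.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i < some_config.json
    killprocess "$spdk_tgt_pid"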
00:05:50.705 04:09:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:50.705 04:09:03 -- common/autotest_common.sh@10 -- # set +x 00:05:50.705 04:09:03 -- spdk/autotest.sh@169 -- # [[ 0 -eq 0 ]] 00:05:50.705 04:09:03 -- spdk/autotest.sh@170 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:50.705 04:09:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:50.705 04:09:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:50.705 04:09:03 -- common/autotest_common.sh@10 -- # set +x 00:05:50.705 ************************************ 00:05:50.705 START TEST spdkcli_tcp 00:05:50.705 ************************************ 00:05:50.705 04:09:03 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:50.987 * Looking for test storage... 00:05:50.987 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:50.987 04:09:03 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:50.987 04:09:03 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:50.987 04:09:03 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:50.987 04:09:03 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:50.987 04:09:03 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:50.987 04:09:03 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:50.987 04:09:03 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:50.987 04:09:03 -- scripts/common.sh@335 -- # IFS=.-: 00:05:50.987 04:09:03 -- scripts/common.sh@335 -- # read -ra ver1 00:05:50.987 04:09:03 -- scripts/common.sh@336 -- # IFS=.-: 00:05:50.987 04:09:03 -- scripts/common.sh@336 -- # read -ra ver2 00:05:50.987 04:09:03 -- scripts/common.sh@337 -- # local 'op=<' 00:05:50.987 04:09:03 -- scripts/common.sh@339 -- # ver1_l=2 00:05:50.987 04:09:03 -- scripts/common.sh@340 -- # ver2_l=1 00:05:50.987 04:09:03 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:50.987 04:09:03 -- scripts/common.sh@343 -- # case "$op" in 00:05:50.987 04:09:03 -- scripts/common.sh@344 -- # : 1 00:05:50.987 04:09:03 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:50.987 04:09:03 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:50.987 04:09:03 -- scripts/common.sh@364 -- # decimal 1 00:05:50.987 04:09:03 -- scripts/common.sh@352 -- # local d=1 00:05:50.987 04:09:03 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:50.987 04:09:03 -- scripts/common.sh@354 -- # echo 1 00:05:50.987 04:09:03 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:50.987 04:09:03 -- scripts/common.sh@365 -- # decimal 2 00:05:50.987 04:09:03 -- scripts/common.sh@352 -- # local d=2 00:05:50.987 04:09:03 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:50.987 04:09:03 -- scripts/common.sh@354 -- # echo 2 00:05:50.987 04:09:03 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:50.987 04:09:03 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:50.987 04:09:03 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:50.987 04:09:03 -- scripts/common.sh@367 -- # return 0 00:05:50.987 04:09:03 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:50.987 04:09:03 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:50.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.987 --rc genhtml_branch_coverage=1 00:05:50.987 --rc genhtml_function_coverage=1 00:05:50.987 --rc genhtml_legend=1 00:05:50.987 --rc geninfo_all_blocks=1 00:05:50.987 --rc geninfo_unexecuted_blocks=1 00:05:50.987 00:05:50.987 ' 00:05:50.987 04:09:03 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:50.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.987 --rc genhtml_branch_coverage=1 00:05:50.987 --rc genhtml_function_coverage=1 00:05:50.987 --rc genhtml_legend=1 00:05:50.987 --rc geninfo_all_blocks=1 00:05:50.987 --rc geninfo_unexecuted_blocks=1 00:05:50.987 00:05:50.987 ' 00:05:50.987 04:09:03 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:50.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.987 --rc genhtml_branch_coverage=1 00:05:50.987 --rc genhtml_function_coverage=1 00:05:50.987 --rc genhtml_legend=1 00:05:50.987 --rc geninfo_all_blocks=1 00:05:50.987 --rc geninfo_unexecuted_blocks=1 00:05:50.987 00:05:50.987 ' 00:05:50.987 04:09:03 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:50.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.987 --rc genhtml_branch_coverage=1 00:05:50.987 --rc genhtml_function_coverage=1 00:05:50.987 --rc genhtml_legend=1 00:05:50.987 --rc geninfo_all_blocks=1 00:05:50.987 --rc geninfo_unexecuted_blocks=1 00:05:50.987 00:05:50.987 ' 00:05:50.987 04:09:03 -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:50.987 04:09:03 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:50.987 04:09:03 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:50.987 04:09:03 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:50.987 04:09:03 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:50.987 04:09:03 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:50.987 04:09:03 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:50.987 04:09:03 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:50.987 04:09:03 -- common/autotest_common.sh@10 -- # set +x 00:05:50.987 04:09:03 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=66597 00:05:50.987 04:09:03 -- spdkcli/tcp.sh@27 -- # waitforlisten 66597 00:05:50.987 04:09:03 -- spdkcli/tcp.sh@24 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:50.987 04:09:03 -- common/autotest_common.sh@829 -- # '[' -z 66597 ']' 00:05:50.987 04:09:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.987 04:09:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:50.987 04:09:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:50.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:50.987 04:09:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:50.987 04:09:03 -- common/autotest_common.sh@10 -- # set +x 00:05:50.987 [2024-12-06 04:09:03.466355] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:50.987 [2024-12-06 04:09:03.466713] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66597 ] 00:05:51.246 [2024-12-06 04:09:03.600952] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:51.246 [2024-12-06 04:09:03.679532] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:51.246 [2024-12-06 04:09:03.680127] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:51.246 [2024-12-06 04:09:03.680138] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.183 04:09:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:52.183 04:09:04 -- common/autotest_common.sh@862 -- # return 0 00:05:52.183 04:09:04 -- spdkcli/tcp.sh@31 -- # socat_pid=66614 00:05:52.183 04:09:04 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:52.183 04:09:04 -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:52.183 [ 00:05:52.183 "bdev_malloc_delete", 00:05:52.183 "bdev_malloc_create", 00:05:52.183 "bdev_null_resize", 00:05:52.183 "bdev_null_delete", 00:05:52.183 "bdev_null_create", 00:05:52.183 "bdev_nvme_cuse_unregister", 00:05:52.183 "bdev_nvme_cuse_register", 00:05:52.183 "bdev_opal_new_user", 00:05:52.183 "bdev_opal_set_lock_state", 00:05:52.183 "bdev_opal_delete", 00:05:52.183 "bdev_opal_get_info", 00:05:52.183 "bdev_opal_create", 00:05:52.183 "bdev_nvme_opal_revert", 00:05:52.183 "bdev_nvme_opal_init", 00:05:52.183 "bdev_nvme_send_cmd", 00:05:52.183 "bdev_nvme_get_path_iostat", 00:05:52.183 "bdev_nvme_get_mdns_discovery_info", 00:05:52.183 "bdev_nvme_stop_mdns_discovery", 00:05:52.183 "bdev_nvme_start_mdns_discovery", 00:05:52.183 "bdev_nvme_set_multipath_policy", 00:05:52.183 "bdev_nvme_set_preferred_path", 00:05:52.183 "bdev_nvme_get_io_paths", 00:05:52.183 "bdev_nvme_remove_error_injection", 00:05:52.183 "bdev_nvme_add_error_injection", 00:05:52.183 "bdev_nvme_get_discovery_info", 00:05:52.183 "bdev_nvme_stop_discovery", 00:05:52.183 "bdev_nvme_start_discovery", 00:05:52.183 "bdev_nvme_get_controller_health_info", 00:05:52.183 "bdev_nvme_disable_controller", 00:05:52.183 "bdev_nvme_enable_controller", 00:05:52.183 "bdev_nvme_reset_controller", 00:05:52.183 "bdev_nvme_get_transport_statistics", 00:05:52.183 "bdev_nvme_apply_firmware", 00:05:52.183 "bdev_nvme_detach_controller", 00:05:52.183 "bdev_nvme_get_controllers", 00:05:52.183 "bdev_nvme_attach_controller", 00:05:52.183 
"bdev_nvme_set_hotplug", 00:05:52.183 "bdev_nvme_set_options", 00:05:52.183 "bdev_passthru_delete", 00:05:52.183 "bdev_passthru_create", 00:05:52.183 "bdev_lvol_grow_lvstore", 00:05:52.183 "bdev_lvol_get_lvols", 00:05:52.183 "bdev_lvol_get_lvstores", 00:05:52.183 "bdev_lvol_delete", 00:05:52.183 "bdev_lvol_set_read_only", 00:05:52.183 "bdev_lvol_resize", 00:05:52.183 "bdev_lvol_decouple_parent", 00:05:52.183 "bdev_lvol_inflate", 00:05:52.183 "bdev_lvol_rename", 00:05:52.183 "bdev_lvol_clone_bdev", 00:05:52.183 "bdev_lvol_clone", 00:05:52.183 "bdev_lvol_snapshot", 00:05:52.183 "bdev_lvol_create", 00:05:52.183 "bdev_lvol_delete_lvstore", 00:05:52.183 "bdev_lvol_rename_lvstore", 00:05:52.183 "bdev_lvol_create_lvstore", 00:05:52.184 "bdev_raid_set_options", 00:05:52.184 "bdev_raid_remove_base_bdev", 00:05:52.184 "bdev_raid_add_base_bdev", 00:05:52.184 "bdev_raid_delete", 00:05:52.184 "bdev_raid_create", 00:05:52.184 "bdev_raid_get_bdevs", 00:05:52.184 "bdev_error_inject_error", 00:05:52.184 "bdev_error_delete", 00:05:52.184 "bdev_error_create", 00:05:52.184 "bdev_split_delete", 00:05:52.184 "bdev_split_create", 00:05:52.184 "bdev_delay_delete", 00:05:52.184 "bdev_delay_create", 00:05:52.184 "bdev_delay_update_latency", 00:05:52.184 "bdev_zone_block_delete", 00:05:52.184 "bdev_zone_block_create", 00:05:52.184 "blobfs_create", 00:05:52.184 "blobfs_detect", 00:05:52.184 "blobfs_set_cache_size", 00:05:52.184 "bdev_aio_delete", 00:05:52.184 "bdev_aio_rescan", 00:05:52.184 "bdev_aio_create", 00:05:52.184 "bdev_ftl_set_property", 00:05:52.184 "bdev_ftl_get_properties", 00:05:52.184 "bdev_ftl_get_stats", 00:05:52.184 "bdev_ftl_unmap", 00:05:52.184 "bdev_ftl_unload", 00:05:52.184 "bdev_ftl_delete", 00:05:52.184 "bdev_ftl_load", 00:05:52.184 "bdev_ftl_create", 00:05:52.184 "bdev_virtio_attach_controller", 00:05:52.184 "bdev_virtio_scsi_get_devices", 00:05:52.184 "bdev_virtio_detach_controller", 00:05:52.184 "bdev_virtio_blk_set_hotplug", 00:05:52.184 "bdev_iscsi_delete", 00:05:52.184 "bdev_iscsi_create", 00:05:52.184 "bdev_iscsi_set_options", 00:05:52.184 "bdev_uring_delete", 00:05:52.184 "bdev_uring_create", 00:05:52.184 "accel_error_inject_error", 00:05:52.184 "ioat_scan_accel_module", 00:05:52.184 "dsa_scan_accel_module", 00:05:52.184 "iaa_scan_accel_module", 00:05:52.184 "iscsi_set_options", 00:05:52.184 "iscsi_get_auth_groups", 00:05:52.184 "iscsi_auth_group_remove_secret", 00:05:52.184 "iscsi_auth_group_add_secret", 00:05:52.184 "iscsi_delete_auth_group", 00:05:52.184 "iscsi_create_auth_group", 00:05:52.184 "iscsi_set_discovery_auth", 00:05:52.184 "iscsi_get_options", 00:05:52.184 "iscsi_target_node_request_logout", 00:05:52.184 "iscsi_target_node_set_redirect", 00:05:52.184 "iscsi_target_node_set_auth", 00:05:52.184 "iscsi_target_node_add_lun", 00:05:52.184 "iscsi_get_connections", 00:05:52.184 "iscsi_portal_group_set_auth", 00:05:52.184 "iscsi_start_portal_group", 00:05:52.184 "iscsi_delete_portal_group", 00:05:52.184 "iscsi_create_portal_group", 00:05:52.184 "iscsi_get_portal_groups", 00:05:52.184 "iscsi_delete_target_node", 00:05:52.184 "iscsi_target_node_remove_pg_ig_maps", 00:05:52.184 "iscsi_target_node_add_pg_ig_maps", 00:05:52.184 "iscsi_create_target_node", 00:05:52.184 "iscsi_get_target_nodes", 00:05:52.184 "iscsi_delete_initiator_group", 00:05:52.184 "iscsi_initiator_group_remove_initiators", 00:05:52.184 "iscsi_initiator_group_add_initiators", 00:05:52.184 "iscsi_create_initiator_group", 00:05:52.184 "iscsi_get_initiator_groups", 00:05:52.184 "nvmf_set_crdt", 00:05:52.184 
"nvmf_set_config", 00:05:52.184 "nvmf_set_max_subsystems", 00:05:52.184 "nvmf_subsystem_get_listeners", 00:05:52.184 "nvmf_subsystem_get_qpairs", 00:05:52.184 "nvmf_subsystem_get_controllers", 00:05:52.184 "nvmf_get_stats", 00:05:52.184 "nvmf_get_transports", 00:05:52.184 "nvmf_create_transport", 00:05:52.184 "nvmf_get_targets", 00:05:52.184 "nvmf_delete_target", 00:05:52.184 "nvmf_create_target", 00:05:52.184 "nvmf_subsystem_allow_any_host", 00:05:52.184 "nvmf_subsystem_remove_host", 00:05:52.184 "nvmf_subsystem_add_host", 00:05:52.184 "nvmf_subsystem_remove_ns", 00:05:52.184 "nvmf_subsystem_add_ns", 00:05:52.184 "nvmf_subsystem_listener_set_ana_state", 00:05:52.184 "nvmf_discovery_get_referrals", 00:05:52.184 "nvmf_discovery_remove_referral", 00:05:52.184 "nvmf_discovery_add_referral", 00:05:52.184 "nvmf_subsystem_remove_listener", 00:05:52.184 "nvmf_subsystem_add_listener", 00:05:52.184 "nvmf_delete_subsystem", 00:05:52.184 "nvmf_create_subsystem", 00:05:52.184 "nvmf_get_subsystems", 00:05:52.184 "env_dpdk_get_mem_stats", 00:05:52.184 "nbd_get_disks", 00:05:52.184 "nbd_stop_disk", 00:05:52.184 "nbd_start_disk", 00:05:52.184 "ublk_recover_disk", 00:05:52.184 "ublk_get_disks", 00:05:52.184 "ublk_stop_disk", 00:05:52.184 "ublk_start_disk", 00:05:52.184 "ublk_destroy_target", 00:05:52.184 "ublk_create_target", 00:05:52.184 "virtio_blk_create_transport", 00:05:52.184 "virtio_blk_get_transports", 00:05:52.184 "vhost_controller_set_coalescing", 00:05:52.184 "vhost_get_controllers", 00:05:52.184 "vhost_delete_controller", 00:05:52.184 "vhost_create_blk_controller", 00:05:52.184 "vhost_scsi_controller_remove_target", 00:05:52.184 "vhost_scsi_controller_add_target", 00:05:52.184 "vhost_start_scsi_controller", 00:05:52.184 "vhost_create_scsi_controller", 00:05:52.184 "thread_set_cpumask", 00:05:52.184 "framework_get_scheduler", 00:05:52.184 "framework_set_scheduler", 00:05:52.184 "framework_get_reactors", 00:05:52.184 "thread_get_io_channels", 00:05:52.184 "thread_get_pollers", 00:05:52.184 "thread_get_stats", 00:05:52.184 "framework_monitor_context_switch", 00:05:52.184 "spdk_kill_instance", 00:05:52.184 "log_enable_timestamps", 00:05:52.184 "log_get_flags", 00:05:52.184 "log_clear_flag", 00:05:52.184 "log_set_flag", 00:05:52.184 "log_get_level", 00:05:52.184 "log_set_level", 00:05:52.184 "log_get_print_level", 00:05:52.184 "log_set_print_level", 00:05:52.184 "framework_enable_cpumask_locks", 00:05:52.184 "framework_disable_cpumask_locks", 00:05:52.184 "framework_wait_init", 00:05:52.184 "framework_start_init", 00:05:52.184 "scsi_get_devices", 00:05:52.184 "bdev_get_histogram", 00:05:52.184 "bdev_enable_histogram", 00:05:52.184 "bdev_set_qos_limit", 00:05:52.184 "bdev_set_qd_sampling_period", 00:05:52.184 "bdev_get_bdevs", 00:05:52.184 "bdev_reset_iostat", 00:05:52.184 "bdev_get_iostat", 00:05:52.184 "bdev_examine", 00:05:52.184 "bdev_wait_for_examine", 00:05:52.184 "bdev_set_options", 00:05:52.184 "notify_get_notifications", 00:05:52.184 "notify_get_types", 00:05:52.184 "accel_get_stats", 00:05:52.184 "accel_set_options", 00:05:52.184 "accel_set_driver", 00:05:52.184 "accel_crypto_key_destroy", 00:05:52.184 "accel_crypto_keys_get", 00:05:52.184 "accel_crypto_key_create", 00:05:52.184 "accel_assign_opc", 00:05:52.184 "accel_get_module_info", 00:05:52.184 "accel_get_opc_assignments", 00:05:52.184 "vmd_rescan", 00:05:52.184 "vmd_remove_device", 00:05:52.184 "vmd_enable", 00:05:52.184 "sock_set_default_impl", 00:05:52.184 "sock_impl_set_options", 00:05:52.184 "sock_impl_get_options", 00:05:52.184 
"iobuf_get_stats", 00:05:52.184 "iobuf_set_options", 00:05:52.184 "framework_get_pci_devices", 00:05:52.184 "framework_get_config", 00:05:52.184 "framework_get_subsystems", 00:05:52.184 "trace_get_info", 00:05:52.185 "trace_get_tpoint_group_mask", 00:05:52.185 "trace_disable_tpoint_group", 00:05:52.185 "trace_enable_tpoint_group", 00:05:52.185 "trace_clear_tpoint_mask", 00:05:52.185 "trace_set_tpoint_mask", 00:05:52.185 "spdk_get_version", 00:05:52.185 "rpc_get_methods" 00:05:52.185 ] 00:05:52.443 04:09:04 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:52.443 04:09:04 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:52.443 04:09:04 -- common/autotest_common.sh@10 -- # set +x 00:05:52.443 04:09:04 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:52.443 04:09:04 -- spdkcli/tcp.sh@38 -- # killprocess 66597 00:05:52.443 04:09:04 -- common/autotest_common.sh@936 -- # '[' -z 66597 ']' 00:05:52.443 04:09:04 -- common/autotest_common.sh@940 -- # kill -0 66597 00:05:52.443 04:09:04 -- common/autotest_common.sh@941 -- # uname 00:05:52.443 04:09:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:52.443 04:09:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66597 00:05:52.443 killing process with pid 66597 00:05:52.443 04:09:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:52.443 04:09:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:52.443 04:09:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66597' 00:05:52.443 04:09:04 -- common/autotest_common.sh@955 -- # kill 66597 00:05:52.443 04:09:04 -- common/autotest_common.sh@960 -- # wait 66597 00:05:53.023 ************************************ 00:05:53.023 END TEST spdkcli_tcp 00:05:53.023 ************************************ 00:05:53.023 00:05:53.023 real 0m2.173s 00:05:53.023 user 0m3.984s 00:05:53.023 sys 0m0.542s 00:05:53.023 04:09:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:53.023 04:09:05 -- common/autotest_common.sh@10 -- # set +x 00:05:53.023 04:09:05 -- spdk/autotest.sh@173 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:53.023 04:09:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:53.023 04:09:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:53.023 04:09:05 -- common/autotest_common.sh@10 -- # set +x 00:05:53.023 ************************************ 00:05:53.023 START TEST dpdk_mem_utility 00:05:53.023 ************************************ 00:05:53.023 04:09:05 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:53.023 * Looking for test storage... 
00:05:53.023 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:53.023 04:09:05 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:53.023 04:09:05 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:53.023 04:09:05 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:53.281 04:09:05 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:53.281 04:09:05 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:53.281 04:09:05 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:53.281 04:09:05 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:53.281 04:09:05 -- scripts/common.sh@335 -- # IFS=.-: 00:05:53.281 04:09:05 -- scripts/common.sh@335 -- # read -ra ver1 00:05:53.281 04:09:05 -- scripts/common.sh@336 -- # IFS=.-: 00:05:53.281 04:09:05 -- scripts/common.sh@336 -- # read -ra ver2 00:05:53.281 04:09:05 -- scripts/common.sh@337 -- # local 'op=<' 00:05:53.281 04:09:05 -- scripts/common.sh@339 -- # ver1_l=2 00:05:53.281 04:09:05 -- scripts/common.sh@340 -- # ver2_l=1 00:05:53.281 04:09:05 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:53.281 04:09:05 -- scripts/common.sh@343 -- # case "$op" in 00:05:53.281 04:09:05 -- scripts/common.sh@344 -- # : 1 00:05:53.281 04:09:05 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:53.281 04:09:05 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:53.281 04:09:05 -- scripts/common.sh@364 -- # decimal 1 00:05:53.281 04:09:05 -- scripts/common.sh@352 -- # local d=1 00:05:53.281 04:09:05 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:53.281 04:09:05 -- scripts/common.sh@354 -- # echo 1 00:05:53.281 04:09:05 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:53.281 04:09:05 -- scripts/common.sh@365 -- # decimal 2 00:05:53.281 04:09:05 -- scripts/common.sh@352 -- # local d=2 00:05:53.281 04:09:05 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:53.281 04:09:05 -- scripts/common.sh@354 -- # echo 2 00:05:53.281 04:09:05 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:53.281 04:09:05 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:53.281 04:09:05 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:53.281 04:09:05 -- scripts/common.sh@367 -- # return 0 00:05:53.281 04:09:05 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:53.281 04:09:05 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:53.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.281 --rc genhtml_branch_coverage=1 00:05:53.281 --rc genhtml_function_coverage=1 00:05:53.281 --rc genhtml_legend=1 00:05:53.281 --rc geninfo_all_blocks=1 00:05:53.281 --rc geninfo_unexecuted_blocks=1 00:05:53.281 00:05:53.281 ' 00:05:53.281 04:09:05 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:53.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.281 --rc genhtml_branch_coverage=1 00:05:53.281 --rc genhtml_function_coverage=1 00:05:53.281 --rc genhtml_legend=1 00:05:53.281 --rc geninfo_all_blocks=1 00:05:53.281 --rc geninfo_unexecuted_blocks=1 00:05:53.281 00:05:53.281 ' 00:05:53.281 04:09:05 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:53.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.281 --rc genhtml_branch_coverage=1 00:05:53.281 --rc genhtml_function_coverage=1 00:05:53.281 --rc genhtml_legend=1 00:05:53.281 --rc geninfo_all_blocks=1 00:05:53.281 --rc geninfo_unexecuted_blocks=1 00:05:53.281 00:05:53.281 ' 
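[Note: looking back at the spdkcli_tcp run above, the target only listens on a UNIX-domain socket, so the test bridges it to TCP with socat and points rpc.py at 127.0.0.1:9998. A minimal sketch of that bridge, reusing the addresses and flags from the trace; killing the socat pid afterwards is an assumed cleanup step.]

    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &   # TCP front end for the RPC socket
    socat_pid=$!
    # -s/-p select the TCP endpoint; -r and -t are the retry count and timeout seen in the trace.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
    kill "$socat_pid"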
00:05:53.281 04:09:05 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:53.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.281 --rc genhtml_branch_coverage=1 00:05:53.281 --rc genhtml_function_coverage=1 00:05:53.281 --rc genhtml_legend=1 00:05:53.281 --rc geninfo_all_blocks=1 00:05:53.281 --rc geninfo_unexecuted_blocks=1 00:05:53.281 00:05:53.281 ' 00:05:53.281 04:09:05 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:53.281 04:09:05 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=66695 00:05:53.281 04:09:05 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:53.281 04:09:05 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 66695 00:05:53.281 04:09:05 -- common/autotest_common.sh@829 -- # '[' -z 66695 ']' 00:05:53.281 04:09:05 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.281 04:09:05 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:53.281 04:09:05 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:53.281 04:09:05 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:53.281 04:09:05 -- common/autotest_common.sh@10 -- # set +x 00:05:53.281 [2024-12-06 04:09:05.676549] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:53.281 [2024-12-06 04:09:05.677299] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66695 ] 00:05:53.281 [2024-12-06 04:09:05.820161] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.538 [2024-12-06 04:09:05.945221] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:53.538 [2024-12-06 04:09:05.945711] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.475 04:09:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:54.475 04:09:06 -- common/autotest_common.sh@862 -- # return 0 00:05:54.475 04:09:06 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:54.475 04:09:06 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:54.475 04:09:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.475 04:09:06 -- common/autotest_common.sh@10 -- # set +x 00:05:54.475 { 00:05:54.475 "filename": "/tmp/spdk_mem_dump.txt" 00:05:54.475 } 00:05:54.475 04:09:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.475 04:09:06 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:54.475 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:54.475 1 heaps totaling size 814.000000 MiB 00:05:54.475 size: 814.000000 MiB heap id: 0 00:05:54.475 end heaps---------- 00:05:54.475 8 mempools totaling size 598.116089 MiB 00:05:54.475 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:54.475 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:54.475 size: 84.521057 MiB name: bdev_io_66695 00:05:54.475 size: 51.011292 MiB name: evtpool_66695 00:05:54.475 size: 50.003479 MiB name: msgpool_66695 
00:05:54.475 size: 21.763794 MiB name: PDU_Pool 00:05:54.475 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:54.475 size: 0.026123 MiB name: Session_Pool 00:05:54.475 end mempools------- 00:05:54.475 6 memzones totaling size 4.142822 MiB 00:05:54.475 size: 1.000366 MiB name: RG_ring_0_66695 00:05:54.475 size: 1.000366 MiB name: RG_ring_1_66695 00:05:54.475 size: 1.000366 MiB name: RG_ring_4_66695 00:05:54.475 size: 1.000366 MiB name: RG_ring_5_66695 00:05:54.475 size: 0.125366 MiB name: RG_ring_2_66695 00:05:54.475 size: 0.015991 MiB name: RG_ring_3_66695 00:05:54.475 end memzones------- 00:05:54.475 04:09:06 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:54.475 heap id: 0 total size: 814.000000 MiB number of busy elements: 304 number of free elements: 15 00:05:54.475 list of free elements. size: 12.471191 MiB 00:05:54.475 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:54.475 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:54.475 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:54.475 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:54.475 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:54.475 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:54.475 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:54.475 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:54.475 element at address: 0x200000200000 with size: 0.832825 MiB 00:05:54.475 element at address: 0x20001aa00000 with size: 0.568787 MiB 00:05:54.475 element at address: 0x20000b200000 with size: 0.488892 MiB 00:05:54.475 element at address: 0x200000800000 with size: 0.486328 MiB 00:05:54.475 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:54.475 element at address: 0x200027e00000 with size: 0.395752 MiB 00:05:54.475 element at address: 0x200003a00000 with size: 0.347839 MiB 00:05:54.475 list of standard malloc elements. 
size: 199.266235 MiB 00:05:54.475 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:54.475 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:54.475 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:54.475 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:54.475 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:54.475 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:54.475 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:54.475 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:54.475 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:54.475 element at address: 0x2000002d5340 with size: 0.000183 MiB 00:05:54.475 element at address: 0x2000002d5400 with size: 0.000183 MiB 00:05:54.475 element at address: 0x2000002d54c0 with size: 0.000183 MiB 00:05:54.475 element at address: 0x2000002d5580 with size: 0.000183 MiB 00:05:54.475 element at address: 0x2000002d5640 with size: 0.000183 MiB 00:05:54.475 element at address: 0x2000002d5700 with size: 0.000183 MiB 00:05:54.475 element at address: 0x2000002d57c0 with size: 0.000183 MiB 00:05:54.475 element at address: 0x2000002d5880 with size: 0.000183 MiB 00:05:54.475 element at address: 0x2000002d5940 with size: 0.000183 MiB 00:05:54.475 element at address: 0x2000002d5a00 with size: 0.000183 MiB 00:05:54.475 element at address: 0x2000002d5ac0 with size: 0.000183 MiB 00:05:54.475 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:05:54.475 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:05:54.475 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:05:54.475 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:05:54.475 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:05:54.475 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:05:54.475 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:05:54.475 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:05:54.475 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:05:54.475 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:05:54.475 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:05:54.475 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:05:54.475 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:05:54.475 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:05:54.475 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:05:54.475 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:05:54.475 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:05:54.475 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:05:54.475 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:05:54.475 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:05:54.475 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:05:54.475 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:05:54.475 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:05:54.475 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:05:54.475 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:05:54.475 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:05:54.475 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:05:54.475 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:05:54.475 element at address: 0x2000002d71c0 with size: 0.000183 MiB 
00:05:54.475 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:05:54.475 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:05:54.475 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:05:54.475 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:05:54.475 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:05:54.475 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:05:54.475 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:05:54.475 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:05:54.475 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:05:54.475 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:05:54.475 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:05:54.475 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:54.475 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:54.475 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:54.475 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:54.475 element at address: 0x20000087c800 with size: 0.000183 MiB 00:05:54.475 element at address: 0x20000087c8c0 with size: 0.000183 MiB 00:05:54.475 element at address: 0x20000087c980 with size: 0.000183 MiB 00:05:54.475 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:05:54.475 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:05:54.475 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:05:54.475 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:05:54.475 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:05:54.475 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:54.475 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:54.475 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:54.475 element at address: 0x200003a590c0 with size: 0.000183 MiB 00:05:54.475 element at address: 0x200003a59180 with size: 0.000183 MiB 00:05:54.475 element at address: 0x200003a59240 with size: 0.000183 MiB 00:05:54.475 element at address: 0x200003a59300 with size: 0.000183 MiB 00:05:54.475 element at address: 0x200003a593c0 with size: 0.000183 MiB 00:05:54.475 element at address: 0x200003a59480 with size: 0.000183 MiB 00:05:54.475 element at address: 0x200003a59540 with size: 0.000183 MiB 00:05:54.475 element at address: 0x200003a59600 with size: 0.000183 MiB 00:05:54.475 element at address: 0x200003a596c0 with size: 0.000183 MiB 00:05:54.475 element at address: 0x200003a59780 with size: 0.000183 MiB 00:05:54.476 element at address: 0x200003a59840 with size: 0.000183 MiB 00:05:54.476 element at address: 0x200003a59900 with size: 0.000183 MiB 00:05:54.476 element at address: 0x200003a599c0 with size: 0.000183 MiB 00:05:54.476 element at address: 0x200003a59a80 with size: 0.000183 MiB 00:05:54.476 element at address: 0x200003a59b40 with size: 0.000183 MiB 00:05:54.476 element at address: 0x200003a59c00 with size: 0.000183 MiB 00:05:54.476 element at address: 0x200003a59cc0 with size: 0.000183 MiB 00:05:54.476 element at address: 0x200003a59d80 with size: 0.000183 MiB 00:05:54.476 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:05:54.476 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:05:54.476 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:05:54.476 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:05:54.476 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:05:54.476 element at 
address: 0x200003a5a200 with size: 0.000183 MiB 00:05:54.476 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:05:54.476 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:05:54.476 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:05:54.476 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:05:54.476 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:05:54.476 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:05:54.476 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:05:54.476 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:05:54.476 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:05:54.476 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:05:54.476 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:05:54.476 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:05:54.476 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:05:54.476 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:05:54.476 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:05:54.476 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:05:54.476 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:05:54.476 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:05:54.476 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:54.476 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:54.476 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:54.476 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:54.476 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:54.476 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:54.476 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:54.476 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:54.476 element at address: 0x20000b27d280 with size: 0.000183 MiB 00:05:54.476 element at address: 0x20000b27d340 with size: 0.000183 MiB 00:05:54.476 element at address: 0x20000b27d400 with size: 0.000183 MiB 00:05:54.476 element at address: 0x20000b27d4c0 with size: 0.000183 MiB 00:05:54.476 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:05:54.476 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:05:54.476 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:05:54.476 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:05:54.476 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:05:54.476 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:05:54.476 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:54.476 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:54.476 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:54.476 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:54.476 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:54.476 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:54.476 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:54.476 element at address: 0x20001aa919c0 with size: 0.000183 MiB 00:05:54.476 element at address: 0x20001aa91a80 with size: 0.000183 MiB 00:05:54.476 element at address: 0x20001aa91b40 with size: 0.000183 MiB 00:05:54.476 element at address: 0x20001aa91c00 with size: 0.000183 MiB 00:05:54.476 element at address: 0x20001aa91cc0 with size: 0.000183 MiB 00:05:54.476 element at address: 0x20001aa91d80 
with size: 0.000183 MiB 00:05:54.476 element at address: 0x20001aa91e40 with size: 0.000183 MiB 00:05:54.476 element at address: 0x20001aa91f00 with size: 0.000183 MiB 00:05:54.476 element at address: 0x20001aa91fc0 with size: 0.000183 MiB 00:05:54.476 element at address: 0x20001aa92080 with size: 0.000183 MiB 00:05:54.476 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:05:54.476 element at address: 0x20001aa92200 with size: 0.000183 MiB 00:05:54.476 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:05:54.476 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:05:54.476 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:05:54.476 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:05:54.476 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:05:54.476 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:05:54.476 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:05:54.476 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:05:54.476 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:05:54.476 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:05:54.476 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:05:54.476 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:05:54.476 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:05:54.476 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:05:54.476 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:05:54.476 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:05:54.476 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:05:54.476 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:05:54.476 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:05:54.476 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:05:54.476 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:05:54.476 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:05:54.476 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:05:54.476 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:05:54.476 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:05:54.476 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:05:54.476 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:05:54.476 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:05:54.476 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:05:54.476 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:05:54.476 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:05:54.476 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:05:54.476 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:05:54.476 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:05:54.476 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:05:54.476 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:05:54.476 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:05:54.476 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:05:54.476 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:05:54.476 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:05:54.476 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:05:54.476 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:05:54.476 element at address: 0x20001aa94240 with size: 0.000183 MiB 
00:05:54.476 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:05:54.476 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:05:54.476 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:05:54.476 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:05:54.476 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:05:54.476 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:05:54.476 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:05:54.476 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:05:54.476 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:05:54.476 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:05:54.476 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:05:54.476 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:05:54.476 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:05:54.476 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:05:54.476 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:05:54.476 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:05:54.476 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:05:54.476 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:05:54.476 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:05:54.476 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:05:54.476 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:05:54.476 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:05:54.476 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:54.476 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:54.476 element at address: 0x200027e65500 with size: 0.000183 MiB 00:05:54.476 element at address: 0x200027e655c0 with size: 0.000183 MiB 00:05:54.476 element at address: 0x200027e6c1c0 with size: 0.000183 MiB 00:05:54.476 element at address: 0x200027e6c3c0 with size: 0.000183 MiB 00:05:54.476 element at address: 0x200027e6c480 with size: 0.000183 MiB 00:05:54.476 element at address: 0x200027e6c540 with size: 0.000183 MiB 00:05:54.476 element at address: 0x200027e6c600 with size: 0.000183 MiB 00:05:54.476 element at address: 0x200027e6c6c0 with size: 0.000183 MiB 00:05:54.476 element at address: 0x200027e6c780 with size: 0.000183 MiB 00:05:54.476 element at address: 0x200027e6c840 with size: 0.000183 MiB 00:05:54.476 element at address: 0x200027e6c900 with size: 0.000183 MiB 00:05:54.476 element at address: 0x200027e6c9c0 with size: 0.000183 MiB 00:05:54.476 element at address: 0x200027e6ca80 with size: 0.000183 MiB 00:05:54.476 element at address: 0x200027e6cb40 with size: 0.000183 MiB 00:05:54.476 element at address: 0x200027e6cc00 with size: 0.000183 MiB 00:05:54.476 element at address: 0x200027e6ccc0 with size: 0.000183 MiB 00:05:54.477 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:05:54.477 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:05:54.477 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:05:54.477 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:05:54.477 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:05:54.477 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:05:54.477 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:05:54.477 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:05:54.477 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:05:54.477 element at 
address: 0x200027e6d440 with size: 0.000183 MiB 00:05:54.477 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:05:54.477 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:05:54.477 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:05:54.477 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:05:54.477 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:05:54.477 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:05:54.477 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:05:54.477 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:05:54.477 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:05:54.477 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:05:54.477 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:05:54.477 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:05:54.477 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:05:54.477 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:05:54.477 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:05:54.477 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:05:54.477 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:05:54.477 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:05:54.477 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:05:54.477 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:05:54.477 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:05:54.477 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:05:54.477 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:05:54.477 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:05:54.477 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:05:54.477 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:05:54.477 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:05:54.477 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:05:54.477 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:05:54.477 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:05:54.477 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:05:54.477 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:05:54.477 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:05:54.477 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:05:54.477 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:05:54.477 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:05:54.477 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:05:54.477 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:05:54.477 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:05:54.477 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:05:54.477 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:05:54.477 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:05:54.477 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:05:54.477 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:05:54.477 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:05:54.477 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:05:54.477 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:05:54.477 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:05:54.477 element at address: 0x200027e6f900 
with size: 0.000183 MiB 00:05:54.477 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:05:54.477 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:05:54.477 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:05:54.477 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:05:54.477 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:05:54.477 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:05:54.477 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:54.477 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:54.477 list of memzone associated elements. size: 602.262573 MiB 00:05:54.477 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:54.477 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:54.477 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:54.477 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:54.477 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:54.477 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_66695_0 00:05:54.477 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:54.477 associated memzone info: size: 48.002930 MiB name: MP_evtpool_66695_0 00:05:54.477 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:54.477 associated memzone info: size: 48.002930 MiB name: MP_msgpool_66695_0 00:05:54.477 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:54.477 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:54.477 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:54.477 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:54.477 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:54.477 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_66695 00:05:54.477 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:54.477 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_66695 00:05:54.477 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:54.477 associated memzone info: size: 1.007996 MiB name: MP_evtpool_66695 00:05:54.477 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:54.477 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:54.477 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:54.477 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:54.477 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:54.477 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:54.477 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:54.477 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:54.477 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:54.477 associated memzone info: size: 1.000366 MiB name: RG_ring_0_66695 00:05:54.477 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:54.477 associated memzone info: size: 1.000366 MiB name: RG_ring_1_66695 00:05:54.477 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:54.477 associated memzone info: size: 1.000366 MiB name: RG_ring_4_66695 00:05:54.477 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:54.477 associated memzone info: size: 1.000366 MiB name: RG_ring_5_66695 00:05:54.477 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:54.477 
associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_66695 00:05:54.477 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:54.477 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:54.477 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:54.477 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:54.477 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:54.477 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:54.477 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:54.477 associated memzone info: size: 0.125366 MiB name: RG_ring_2_66695 00:05:54.477 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:54.477 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:54.477 element at address: 0x200027e65680 with size: 0.023743 MiB 00:05:54.477 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:54.477 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:54.477 associated memzone info: size: 0.015991 MiB name: RG_ring_3_66695 00:05:54.477 element at address: 0x200027e6b7c0 with size: 0.002441 MiB 00:05:54.477 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:54.477 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:05:54.477 associated memzone info: size: 0.000183 MiB name: MP_msgpool_66695 00:05:54.477 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:54.477 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_66695 00:05:54.477 element at address: 0x200027e6c280 with size: 0.000305 MiB 00:05:54.477 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:54.477 04:09:06 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:54.477 04:09:06 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 66695 00:05:54.477 04:09:06 -- common/autotest_common.sh@936 -- # '[' -z 66695 ']' 00:05:54.477 04:09:06 -- common/autotest_common.sh@940 -- # kill -0 66695 00:05:54.477 04:09:06 -- common/autotest_common.sh@941 -- # uname 00:05:54.477 04:09:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:54.477 04:09:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66695 00:05:54.477 killing process with pid 66695 00:05:54.477 04:09:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:54.477 04:09:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:54.477 04:09:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66695' 00:05:54.477 04:09:06 -- common/autotest_common.sh@955 -- # kill 66695 00:05:54.477 04:09:06 -- common/autotest_common.sh@960 -- # wait 66695 00:05:55.043 00:05:55.043 real 0m2.033s 00:05:55.043 user 0m2.125s 00:05:55.043 sys 0m0.551s 00:05:55.043 04:09:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:55.043 ************************************ 00:05:55.043 END TEST dpdk_mem_utility 00:05:55.043 ************************************ 00:05:55.043 04:09:07 -- common/autotest_common.sh@10 -- # set +x 00:05:55.043 04:09:07 -- spdk/autotest.sh@174 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:55.043 04:09:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:55.043 04:09:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:55.043 04:09:07 -- common/autotest_common.sh@10 -- # set +x 00:05:55.043 
************************************ 00:05:55.043 START TEST event 00:05:55.044 ************************************ 00:05:55.044 04:09:07 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:55.044 * Looking for test storage... 00:05:55.044 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:55.044 04:09:07 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:55.044 04:09:07 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:55.044 04:09:07 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:55.302 04:09:07 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:55.302 04:09:07 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:55.302 04:09:07 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:55.302 04:09:07 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:55.302 04:09:07 -- scripts/common.sh@335 -- # IFS=.-: 00:05:55.302 04:09:07 -- scripts/common.sh@335 -- # read -ra ver1 00:05:55.302 04:09:07 -- scripts/common.sh@336 -- # IFS=.-: 00:05:55.302 04:09:07 -- scripts/common.sh@336 -- # read -ra ver2 00:05:55.302 04:09:07 -- scripts/common.sh@337 -- # local 'op=<' 00:05:55.302 04:09:07 -- scripts/common.sh@339 -- # ver1_l=2 00:05:55.302 04:09:07 -- scripts/common.sh@340 -- # ver2_l=1 00:05:55.302 04:09:07 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:55.302 04:09:07 -- scripts/common.sh@343 -- # case "$op" in 00:05:55.302 04:09:07 -- scripts/common.sh@344 -- # : 1 00:05:55.302 04:09:07 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:55.302 04:09:07 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:55.302 04:09:07 -- scripts/common.sh@364 -- # decimal 1 00:05:55.302 04:09:07 -- scripts/common.sh@352 -- # local d=1 00:05:55.302 04:09:07 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:55.302 04:09:07 -- scripts/common.sh@354 -- # echo 1 00:05:55.302 04:09:07 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:55.302 04:09:07 -- scripts/common.sh@365 -- # decimal 2 00:05:55.302 04:09:07 -- scripts/common.sh@352 -- # local d=2 00:05:55.302 04:09:07 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:55.302 04:09:07 -- scripts/common.sh@354 -- # echo 2 00:05:55.302 04:09:07 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:55.302 04:09:07 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:55.302 04:09:07 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:55.302 04:09:07 -- scripts/common.sh@367 -- # return 0 00:05:55.302 04:09:07 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:55.302 04:09:07 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:55.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.302 --rc genhtml_branch_coverage=1 00:05:55.302 --rc genhtml_function_coverage=1 00:05:55.302 --rc genhtml_legend=1 00:05:55.302 --rc geninfo_all_blocks=1 00:05:55.302 --rc geninfo_unexecuted_blocks=1 00:05:55.302 00:05:55.302 ' 00:05:55.302 04:09:07 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:55.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.302 --rc genhtml_branch_coverage=1 00:05:55.302 --rc genhtml_function_coverage=1 00:05:55.302 --rc genhtml_legend=1 00:05:55.302 --rc geninfo_all_blocks=1 00:05:55.302 --rc geninfo_unexecuted_blocks=1 00:05:55.302 00:05:55.302 ' 00:05:55.302 04:09:07 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:55.302 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:55.302 --rc genhtml_branch_coverage=1 00:05:55.302 --rc genhtml_function_coverage=1 00:05:55.302 --rc genhtml_legend=1 00:05:55.302 --rc geninfo_all_blocks=1 00:05:55.302 --rc geninfo_unexecuted_blocks=1 00:05:55.302 00:05:55.302 ' 00:05:55.302 04:09:07 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:55.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.302 --rc genhtml_branch_coverage=1 00:05:55.302 --rc genhtml_function_coverage=1 00:05:55.302 --rc genhtml_legend=1 00:05:55.302 --rc geninfo_all_blocks=1 00:05:55.302 --rc geninfo_unexecuted_blocks=1 00:05:55.302 00:05:55.302 ' 00:05:55.302 04:09:07 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:55.302 04:09:07 -- bdev/nbd_common.sh@6 -- # set -e 00:05:55.302 04:09:07 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:55.302 04:09:07 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:05:55.302 04:09:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:55.302 04:09:07 -- common/autotest_common.sh@10 -- # set +x 00:05:55.302 ************************************ 00:05:55.302 START TEST event_perf 00:05:55.302 ************************************ 00:05:55.302 04:09:07 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:55.302 Running I/O for 1 seconds...[2024-12-06 04:09:07.727890] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:55.302 [2024-12-06 04:09:07.728339] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66779 ] 00:05:55.562 [2024-12-06 04:09:07.870226] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:55.562 [2024-12-06 04:09:08.003995] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:55.562 [2024-12-06 04:09:08.004171] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:55.562 [2024-12-06 04:09:08.004259] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:55.562 [2024-12-06 04:09:08.004278] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.941 Running I/O for 1 seconds... 00:05:56.941 lcore 0: 121267 00:05:56.941 lcore 1: 121268 00:05:56.941 lcore 2: 121269 00:05:56.941 lcore 3: 121270 00:05:56.941 done. 
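[editor's note] The lcore lines just above are the per-core event counts that the event_perf app reports after its one-second run with four reactors active. A minimal sketch of rerunning just that benchmark by hand, reusing the binary path, core mask, and duration traced in the log (root privileges and configured hugepages are assumed, as in the CI environment; SPDK_REPO is an assumed checkout path):

# Sketch: rerun the event_perf benchmark with the same arguments as above.
SPDK_REPO=${SPDK_REPO:-/home/vagrant/spdk_repo/spdk}

# -m 0xF pins reactors to cores 0-3; -t 1 runs the measurement for one second.
"$SPDK_REPO/test/event/event_perf/event_perf" -m 0xF -t 1
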
00:05:56.941 00:05:56.941 real 0m1.404s 00:05:56.941 user 0m4.197s 00:05:56.941 sys 0m0.077s 00:05:56.941 04:09:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:56.941 04:09:09 -- common/autotest_common.sh@10 -- # set +x 00:05:56.941 ************************************ 00:05:56.941 END TEST event_perf 00:05:56.941 ************************************ 00:05:56.941 04:09:09 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:56.941 04:09:09 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:56.941 04:09:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:56.941 04:09:09 -- common/autotest_common.sh@10 -- # set +x 00:05:56.941 ************************************ 00:05:56.941 START TEST event_reactor 00:05:56.941 ************************************ 00:05:56.941 04:09:09 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:56.941 [2024-12-06 04:09:09.186817] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:56.941 [2024-12-06 04:09:09.187534] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66812 ] 00:05:56.941 [2024-12-06 04:09:09.321246] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.941 [2024-12-06 04:09:09.443685] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.321 test_start 00:05:58.321 oneshot 00:05:58.321 tick 100 00:05:58.321 tick 100 00:05:58.321 tick 250 00:05:58.321 tick 100 00:05:58.321 tick 100 00:05:58.321 tick 250 00:05:58.321 tick 500 00:05:58.321 tick 100 00:05:58.321 tick 100 00:05:58.321 tick 100 00:05:58.321 tick 250 00:05:58.321 tick 100 00:05:58.321 tick 100 00:05:58.321 test_end 00:05:58.321 ************************************ 00:05:58.321 END TEST event_reactor 00:05:58.321 ************************************ 00:05:58.321 00:05:58.321 real 0m1.368s 00:05:58.321 user 0m1.186s 00:05:58.321 sys 0m0.072s 00:05:58.321 04:09:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:58.321 04:09:10 -- common/autotest_common.sh@10 -- # set +x 00:05:58.321 04:09:10 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:58.321 04:09:10 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:58.321 04:09:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:58.321 04:09:10 -- common/autotest_common.sh@10 -- # set +x 00:05:58.321 ************************************ 00:05:58.321 START TEST event_reactor_perf 00:05:58.321 ************************************ 00:05:58.321 04:09:10 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:58.322 [2024-12-06 04:09:10.604457] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:58.322 [2024-12-06 04:09:10.604542] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66853 ] 00:05:58.322 [2024-12-06 04:09:10.740848] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.322 [2024-12-06 04:09:10.862235] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.701 test_start 00:05:59.701 test_end 00:05:59.701 Performance: 347093 events per second 00:05:59.701 ************************************ 00:05:59.701 END TEST event_reactor_perf 00:05:59.701 ************************************ 00:05:59.701 00:05:59.701 real 0m1.375s 00:05:59.701 user 0m1.191s 00:05:59.701 sys 0m0.073s 00:05:59.701 04:09:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:59.701 04:09:11 -- common/autotest_common.sh@10 -- # set +x 00:05:59.701 04:09:12 -- event/event.sh@49 -- # uname -s 00:05:59.701 04:09:12 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:59.701 04:09:12 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:59.701 04:09:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:59.701 04:09:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:59.701 04:09:12 -- common/autotest_common.sh@10 -- # set +x 00:05:59.701 ************************************ 00:05:59.701 START TEST event_scheduler 00:05:59.701 ************************************ 00:05:59.701 04:09:12 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:59.701 * Looking for test storage... 00:05:59.701 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:59.701 04:09:12 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:59.701 04:09:12 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:59.701 04:09:12 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:59.701 04:09:12 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:59.701 04:09:12 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:59.701 04:09:12 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:59.701 04:09:12 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:59.701 04:09:12 -- scripts/common.sh@335 -- # IFS=.-: 00:05:59.701 04:09:12 -- scripts/common.sh@335 -- # read -ra ver1 00:05:59.701 04:09:12 -- scripts/common.sh@336 -- # IFS=.-: 00:05:59.701 04:09:12 -- scripts/common.sh@336 -- # read -ra ver2 00:05:59.701 04:09:12 -- scripts/common.sh@337 -- # local 'op=<' 00:05:59.701 04:09:12 -- scripts/common.sh@339 -- # ver1_l=2 00:05:59.701 04:09:12 -- scripts/common.sh@340 -- # ver2_l=1 00:05:59.701 04:09:12 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:59.701 04:09:12 -- scripts/common.sh@343 -- # case "$op" in 00:05:59.701 04:09:12 -- scripts/common.sh@344 -- # : 1 00:05:59.701 04:09:12 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:59.701 04:09:12 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:59.701 04:09:12 -- scripts/common.sh@364 -- # decimal 1 00:05:59.701 04:09:12 -- scripts/common.sh@352 -- # local d=1 00:05:59.701 04:09:12 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:59.701 04:09:12 -- scripts/common.sh@354 -- # echo 1 00:05:59.701 04:09:12 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:59.701 04:09:12 -- scripts/common.sh@365 -- # decimal 2 00:05:59.701 04:09:12 -- scripts/common.sh@352 -- # local d=2 00:05:59.701 04:09:12 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:59.701 04:09:12 -- scripts/common.sh@354 -- # echo 2 00:05:59.701 04:09:12 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:59.701 04:09:12 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:59.701 04:09:12 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:59.701 04:09:12 -- scripts/common.sh@367 -- # return 0 00:05:59.701 04:09:12 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:59.701 04:09:12 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:59.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.701 --rc genhtml_branch_coverage=1 00:05:59.701 --rc genhtml_function_coverage=1 00:05:59.701 --rc genhtml_legend=1 00:05:59.701 --rc geninfo_all_blocks=1 00:05:59.701 --rc geninfo_unexecuted_blocks=1 00:05:59.701 00:05:59.701 ' 00:05:59.701 04:09:12 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:59.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.701 --rc genhtml_branch_coverage=1 00:05:59.701 --rc genhtml_function_coverage=1 00:05:59.701 --rc genhtml_legend=1 00:05:59.701 --rc geninfo_all_blocks=1 00:05:59.701 --rc geninfo_unexecuted_blocks=1 00:05:59.701 00:05:59.701 ' 00:05:59.701 04:09:12 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:59.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.701 --rc genhtml_branch_coverage=1 00:05:59.701 --rc genhtml_function_coverage=1 00:05:59.701 --rc genhtml_legend=1 00:05:59.701 --rc geninfo_all_blocks=1 00:05:59.701 --rc geninfo_unexecuted_blocks=1 00:05:59.701 00:05:59.701 ' 00:05:59.701 04:09:12 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:59.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.701 --rc genhtml_branch_coverage=1 00:05:59.702 --rc genhtml_function_coverage=1 00:05:59.702 --rc genhtml_legend=1 00:05:59.702 --rc geninfo_all_blocks=1 00:05:59.702 --rc geninfo_unexecuted_blocks=1 00:05:59.702 00:05:59.702 ' 00:05:59.702 04:09:12 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:59.702 04:09:12 -- scheduler/scheduler.sh@35 -- # scheduler_pid=66916 00:05:59.702 04:09:12 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:59.702 04:09:12 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:59.702 04:09:12 -- scheduler/scheduler.sh@37 -- # waitforlisten 66916 00:05:59.702 04:09:12 -- common/autotest_common.sh@829 -- # '[' -z 66916 ']' 00:05:59.702 04:09:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.702 04:09:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:59.702 04:09:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
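[editor's note] The xtrace earlier in this block shows scripts/common.sh comparing the installed lcov version (1.15) against 2 to decide which coverage flags to export. A simplified standalone sketch of that dot-separated comparison follows; ver_lt is a hypothetical helper name used only for illustration, not the function the harness actually calls, and it ignores the '-' and ':' separators the real cmp_versions also handles.

# Sketch: compare two dot-separated versions field by field, numerically.
ver_lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}
        (( x < y )) && return 0   # first lower field decides: strictly less
        (( x > y )) && return 1
    done
    return 1                      # equal versions are not "less than"
}

ver_lt 1.15 2 && echo "1.15 is older than 2"   # prints: 1.15 is older than 2
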
00:05:59.702 04:09:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:59.702 04:09:12 -- common/autotest_common.sh@10 -- # set +x 00:05:59.961 [2024-12-06 04:09:12.267209] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:59.961 [2024-12-06 04:09:12.267582] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66916 ] 00:05:59.961 [2024-12-06 04:09:12.412232] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:59.961 [2024-12-06 04:09:12.514267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.961 [2024-12-06 04:09:12.514352] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:59.961 [2024-12-06 04:09:12.514479] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:59.961 [2024-12-06 04:09:12.514481] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:00.900 04:09:13 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:00.900 04:09:13 -- common/autotest_common.sh@862 -- # return 0 00:06:00.900 04:09:13 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:00.900 04:09:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:00.900 04:09:13 -- common/autotest_common.sh@10 -- # set +x 00:06:00.900 POWER: Env isn't set yet! 00:06:00.900 POWER: Attempting to initialise ACPI cpufreq power management... 00:06:00.900 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:00.900 POWER: Cannot set governor of lcore 0 to userspace 00:06:00.900 POWER: Attempting to initialise PSTAT power management... 00:06:00.900 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:00.900 POWER: Cannot set governor of lcore 0 to performance 00:06:00.900 POWER: Attempting to initialise AMD PSTATE power management... 00:06:00.900 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:00.900 POWER: Cannot set governor of lcore 0 to userspace 00:06:00.900 POWER: Attempting to initialise CPPC power management... 00:06:00.900 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:00.900 POWER: Cannot set governor of lcore 0 to userspace 00:06:00.900 POWER: Attempting to initialise VM power management... 
00:06:00.900 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:00.900 POWER: Unable to set Power Management Environment for lcore 0 00:06:00.900 [2024-12-06 04:09:13.285986] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:06:00.900 [2024-12-06 04:09:13.286000] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:06:00.900 [2024-12-06 04:09:13.286008] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:06:00.900 [2024-12-06 04:09:13.286021] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:00.900 [2024-12-06 04:09:13.286028] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:00.900 [2024-12-06 04:09:13.286036] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:00.900 04:09:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:00.900 04:09:13 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:00.900 04:09:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:00.900 04:09:13 -- common/autotest_common.sh@10 -- # set +x 00:06:00.900 [2024-12-06 04:09:13.385474] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:00.900 04:09:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:00.900 04:09:13 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:00.900 04:09:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:00.900 04:09:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:00.900 04:09:13 -- common/autotest_common.sh@10 -- # set +x 00:06:00.900 ************************************ 00:06:00.900 START TEST scheduler_create_thread 00:06:00.900 ************************************ 00:06:00.900 04:09:13 -- common/autotest_common.sh@1114 -- # scheduler_create_thread 00:06:00.900 04:09:13 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:00.900 04:09:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:00.900 04:09:13 -- common/autotest_common.sh@10 -- # set +x 00:06:00.900 2 00:06:00.900 04:09:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:00.900 04:09:13 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:00.900 04:09:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:00.900 04:09:13 -- common/autotest_common.sh@10 -- # set +x 00:06:00.900 3 00:06:00.900 04:09:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:00.900 04:09:13 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:00.900 04:09:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:00.900 04:09:13 -- common/autotest_common.sh@10 -- # set +x 00:06:00.900 4 00:06:00.900 04:09:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:00.900 04:09:13 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:00.900 04:09:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:00.900 04:09:13 -- common/autotest_common.sh@10 -- # set +x 00:06:00.900 5 00:06:00.900 04:09:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:00.900 04:09:13 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:00.900 04:09:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:00.900 04:09:13 -- common/autotest_common.sh@10 -- # set +x 00:06:00.900 6 00:06:00.900 04:09:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:00.901 04:09:13 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:00.901 04:09:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:00.901 04:09:13 -- common/autotest_common.sh@10 -- # set +x 00:06:00.901 7 00:06:00.901 04:09:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:00.901 04:09:13 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:00.901 04:09:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:00.901 04:09:13 -- common/autotest_common.sh@10 -- # set +x 00:06:00.901 8 00:06:00.901 04:09:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:00.901 04:09:13 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:00.901 04:09:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:00.901 04:09:13 -- common/autotest_common.sh@10 -- # set +x 00:06:01.160 9 00:06:01.160 04:09:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.160 04:09:13 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:01.160 04:09:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.160 04:09:13 -- common/autotest_common.sh@10 -- # set +x 00:06:01.160 10 00:06:01.160 04:09:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.160 04:09:13 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:01.160 04:09:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.160 04:09:13 -- common/autotest_common.sh@10 -- # set +x 00:06:01.160 04:09:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.160 04:09:13 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:01.160 04:09:13 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:01.160 04:09:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.160 04:09:13 -- common/autotest_common.sh@10 -- # set +x 00:06:01.160 04:09:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.160 04:09:13 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:01.160 04:09:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.160 04:09:13 -- common/autotest_common.sh@10 -- # set +x 00:06:01.160 04:09:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.160 04:09:13 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:01.160 04:09:13 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:01.160 04:09:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.160 04:09:13 -- common/autotest_common.sh@10 -- # set +x 00:06:02.098 ************************************ 00:06:02.098 END TEST scheduler_create_thread 00:06:02.098 ************************************ 00:06:02.098 04:09:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.098 00:06:02.098 real 0m1.169s 00:06:02.098 user 0m0.015s 00:06:02.098 sys 0m0.006s 00:06:02.098 04:09:14 -- 
common/autotest_common.sh@1115 -- # xtrace_disable 00:06:02.098 04:09:14 -- common/autotest_common.sh@10 -- # set +x 00:06:02.098 04:09:14 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:02.098 04:09:14 -- scheduler/scheduler.sh@46 -- # killprocess 66916 00:06:02.098 04:09:14 -- common/autotest_common.sh@936 -- # '[' -z 66916 ']' 00:06:02.098 04:09:14 -- common/autotest_common.sh@940 -- # kill -0 66916 00:06:02.098 04:09:14 -- common/autotest_common.sh@941 -- # uname 00:06:02.098 04:09:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:02.098 04:09:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66916 00:06:02.098 killing process with pid 66916 00:06:02.098 04:09:14 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:06:02.098 04:09:14 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:06:02.098 04:09:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66916' 00:06:02.098 04:09:14 -- common/autotest_common.sh@955 -- # kill 66916 00:06:02.098 04:09:14 -- common/autotest_common.sh@960 -- # wait 66916 00:06:02.675 [2024-12-06 04:09:15.048018] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:02.933 00:06:02.933 real 0m3.215s 00:06:02.933 user 0m5.803s 00:06:02.933 sys 0m0.416s 00:06:02.933 04:09:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:02.933 04:09:15 -- common/autotest_common.sh@10 -- # set +x 00:06:02.933 ************************************ 00:06:02.933 END TEST event_scheduler 00:06:02.933 ************************************ 00:06:02.933 04:09:15 -- event/event.sh@51 -- # modprobe -n nbd 00:06:02.933 04:09:15 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:02.933 04:09:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:02.933 04:09:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:02.933 04:09:15 -- common/autotest_common.sh@10 -- # set +x 00:06:02.933 ************************************ 00:06:02.933 START TEST app_repeat 00:06:02.933 ************************************ 00:06:02.933 04:09:15 -- common/autotest_common.sh@1114 -- # app_repeat_test 00:06:02.934 04:09:15 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.934 04:09:15 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.934 04:09:15 -- event/event.sh@13 -- # local nbd_list 00:06:02.934 04:09:15 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:02.934 04:09:15 -- event/event.sh@14 -- # local bdev_list 00:06:02.934 04:09:15 -- event/event.sh@15 -- # local repeat_times=4 00:06:02.934 04:09:15 -- event/event.sh@17 -- # modprobe nbd 00:06:02.934 04:09:15 -- event/event.sh@19 -- # repeat_pid=66999 00:06:02.934 04:09:15 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:02.934 04:09:15 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:02.934 Process app_repeat pid: 66999 00:06:02.934 spdk_app_start Round 0 00:06:02.934 04:09:15 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 66999' 00:06:02.934 04:09:15 -- event/event.sh@23 -- # for i in {0..2} 00:06:02.934 04:09:15 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:02.934 04:09:15 -- event/event.sh@25 -- # waitforlisten 66999 /var/tmp/spdk-nbd.sock 00:06:02.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
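[editor's note] The event_scheduler run that finishes above reduces to a small RPC sequence: start an SPDK app with --wait-for-rpc, select the dynamic scheduler, then let framework initialization continue. A hedged sketch of the same flow against a generic target follows; the test itself uses its own scheduler binary and an RPC plugin, so the spdk_tgt path, the default socket, and the readiness poll below are assumptions, and root plus configured hugepages are required as in the CI run.

# Sketch: pick the dynamic scheduler before framework init, mirroring the
# framework_set_scheduler / framework_start_init calls traced above.
SPDK_REPO=${SPDK_REPO:-/home/vagrant/spdk_repo/spdk}
RPC_SOCK=/var/tmp/spdk.sock                     # assumed default RPC socket

"$SPDK_REPO/build/bin/spdk_tgt" -m 0xF --wait-for-rpc &   # assumed build path
TGT_PID=$!

# Poll until the RPC socket answers, then configure and start the framework.
for _ in $(seq 1 50); do
    "$SPDK_REPO/scripts/rpc.py" -s "$RPC_SOCK" rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.2
done
"$SPDK_REPO/scripts/rpc.py" -s "$RPC_SOCK" framework_set_scheduler dynamic
"$SPDK_REPO/scripts/rpc.py" -s "$RPC_SOCK" framework_start_init

kill "$TGT_PID"
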
00:06:02.934 04:09:15 -- common/autotest_common.sh@829 -- # '[' -z 66999 ']' 00:06:02.934 04:09:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:02.934 04:09:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:02.934 04:09:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:02.934 04:09:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:02.934 04:09:15 -- common/autotest_common.sh@10 -- # set +x 00:06:02.934 [2024-12-06 04:09:15.328998] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:02.934 [2024-12-06 04:09:15.329086] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66999 ] 00:06:02.934 [2024-12-06 04:09:15.470162] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:03.192 [2024-12-06 04:09:15.587204] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:03.192 [2024-12-06 04:09:15.587226] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.128 04:09:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:04.128 04:09:16 -- common/autotest_common.sh@862 -- # return 0 00:06:04.128 04:09:16 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:04.128 Malloc0 00:06:04.128 04:09:16 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:04.697 Malloc1 00:06:04.697 04:09:16 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:04.697 04:09:16 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.697 04:09:16 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:04.697 04:09:16 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:04.697 04:09:16 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.697 04:09:16 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:04.697 04:09:16 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:04.697 04:09:16 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.697 04:09:16 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:04.697 04:09:16 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:04.697 04:09:16 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.697 04:09:16 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:04.697 04:09:16 -- bdev/nbd_common.sh@12 -- # local i 00:06:04.697 04:09:16 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:04.697 04:09:16 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:04.697 04:09:16 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:04.697 /dev/nbd0 00:06:04.697 04:09:17 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:04.697 04:09:17 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:04.697 04:09:17 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:04.697 04:09:17 -- common/autotest_common.sh@867 -- # local i 00:06:04.697 04:09:17 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:04.697 
04:09:17 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:04.697 04:09:17 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:04.697 04:09:17 -- common/autotest_common.sh@871 -- # break 00:06:04.697 04:09:17 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:04.697 04:09:17 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:04.697 04:09:17 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:04.956 1+0 records in 00:06:04.956 1+0 records out 00:06:04.956 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000377753 s, 10.8 MB/s 00:06:04.956 04:09:17 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:04.956 04:09:17 -- common/autotest_common.sh@884 -- # size=4096 00:06:04.956 04:09:17 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:04.956 04:09:17 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:04.956 04:09:17 -- common/autotest_common.sh@887 -- # return 0 00:06:04.956 04:09:17 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:04.956 04:09:17 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:04.956 04:09:17 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:05.215 /dev/nbd1 00:06:05.215 04:09:17 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:05.215 04:09:17 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:05.215 04:09:17 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:05.215 04:09:17 -- common/autotest_common.sh@867 -- # local i 00:06:05.215 04:09:17 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:05.215 04:09:17 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:05.215 04:09:17 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:05.215 04:09:17 -- common/autotest_common.sh@871 -- # break 00:06:05.215 04:09:17 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:05.215 04:09:17 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:05.215 04:09:17 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:05.215 1+0 records in 00:06:05.215 1+0 records out 00:06:05.215 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000237576 s, 17.2 MB/s 00:06:05.215 04:09:17 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:05.215 04:09:17 -- common/autotest_common.sh@884 -- # size=4096 00:06:05.215 04:09:17 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:05.215 04:09:17 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:05.215 04:09:17 -- common/autotest_common.sh@887 -- # return 0 00:06:05.215 04:09:17 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:05.215 04:09:17 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:05.215 04:09:17 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:05.215 04:09:17 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.215 04:09:17 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:05.474 04:09:17 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:05.474 { 00:06:05.474 "nbd_device": "/dev/nbd0", 00:06:05.474 "bdev_name": "Malloc0" 00:06:05.474 }, 00:06:05.474 { 00:06:05.474 "nbd_device": 
"/dev/nbd1", 00:06:05.474 "bdev_name": "Malloc1" 00:06:05.474 } 00:06:05.474 ]' 00:06:05.474 04:09:17 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:05.474 04:09:17 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:05.474 { 00:06:05.474 "nbd_device": "/dev/nbd0", 00:06:05.474 "bdev_name": "Malloc0" 00:06:05.474 }, 00:06:05.474 { 00:06:05.474 "nbd_device": "/dev/nbd1", 00:06:05.474 "bdev_name": "Malloc1" 00:06:05.474 } 00:06:05.474 ]' 00:06:05.474 04:09:17 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:05.474 /dev/nbd1' 00:06:05.474 04:09:17 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:05.474 /dev/nbd1' 00:06:05.474 04:09:17 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:05.474 04:09:17 -- bdev/nbd_common.sh@65 -- # count=2 00:06:05.474 04:09:17 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:05.474 04:09:17 -- bdev/nbd_common.sh@95 -- # count=2 00:06:05.474 04:09:17 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:05.474 04:09:17 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:05.474 04:09:17 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.474 04:09:17 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:05.474 04:09:17 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:05.474 04:09:17 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:05.474 04:09:17 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:05.474 04:09:17 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:05.474 256+0 records in 00:06:05.474 256+0 records out 00:06:05.474 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00906512 s, 116 MB/s 00:06:05.474 04:09:18 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:05.474 04:09:18 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:05.474 256+0 records in 00:06:05.474 256+0 records out 00:06:05.474 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0273412 s, 38.4 MB/s 00:06:05.474 04:09:18 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:05.474 04:09:18 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:05.733 256+0 records in 00:06:05.733 256+0 records out 00:06:05.733 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.025527 s, 41.1 MB/s 00:06:05.733 04:09:18 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:05.733 04:09:18 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.733 04:09:18 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:05.733 04:09:18 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:05.733 04:09:18 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:05.733 04:09:18 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:05.733 04:09:18 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:05.733 04:09:18 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:05.733 04:09:18 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:05.733 04:09:18 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:05.733 04:09:18 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:05.733 04:09:18 -- bdev/nbd_common.sh@85 
-- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:05.733 04:09:18 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:05.733 04:09:18 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.733 04:09:18 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.733 04:09:18 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:05.733 04:09:18 -- bdev/nbd_common.sh@51 -- # local i 00:06:05.733 04:09:18 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:05.733 04:09:18 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:05.993 04:09:18 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:05.993 04:09:18 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:05.993 04:09:18 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:05.993 04:09:18 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:05.993 04:09:18 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:05.993 04:09:18 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:05.993 04:09:18 -- bdev/nbd_common.sh@41 -- # break 00:06:05.993 04:09:18 -- bdev/nbd_common.sh@45 -- # return 0 00:06:05.993 04:09:18 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:05.993 04:09:18 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:06.252 04:09:18 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:06.252 04:09:18 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:06.252 04:09:18 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:06.252 04:09:18 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:06.252 04:09:18 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:06.252 04:09:18 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:06.252 04:09:18 -- bdev/nbd_common.sh@41 -- # break 00:06:06.252 04:09:18 -- bdev/nbd_common.sh@45 -- # return 0 00:06:06.252 04:09:18 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:06.252 04:09:18 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.252 04:09:18 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:06.511 04:09:18 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:06.511 04:09:18 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:06.511 04:09:18 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:06.511 04:09:18 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:06.511 04:09:18 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:06.511 04:09:18 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:06.511 04:09:18 -- bdev/nbd_common.sh@65 -- # true 00:06:06.511 04:09:18 -- bdev/nbd_common.sh@65 -- # count=0 00:06:06.511 04:09:18 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:06.511 04:09:18 -- bdev/nbd_common.sh@104 -- # count=0 00:06:06.511 04:09:18 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:06.511 04:09:18 -- bdev/nbd_common.sh@109 -- # return 0 00:06:06.511 04:09:18 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:06.770 04:09:19 -- event/event.sh@35 -- # sleep 3 00:06:07.034 [2024-12-06 04:09:19.512568] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:07.034 [2024-12-06 04:09:19.583215] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:07.034 
[2024-12-06 04:09:19.583234] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.303 [2024-12-06 04:09:19.656764] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:07.303 [2024-12-06 04:09:19.656846] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:09.838 spdk_app_start Round 1 00:06:09.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:09.838 04:09:22 -- event/event.sh@23 -- # for i in {0..2} 00:06:09.838 04:09:22 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:09.838 04:09:22 -- event/event.sh@25 -- # waitforlisten 66999 /var/tmp/spdk-nbd.sock 00:06:09.838 04:09:22 -- common/autotest_common.sh@829 -- # '[' -z 66999 ']' 00:06:09.838 04:09:22 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:09.838 04:09:22 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:09.838 04:09:22 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:09.839 04:09:22 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:09.839 04:09:22 -- common/autotest_common.sh@10 -- # set +x 00:06:10.097 04:09:22 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:10.097 04:09:22 -- common/autotest_common.sh@862 -- # return 0 00:06:10.097 04:09:22 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:10.357 Malloc0 00:06:10.357 04:09:22 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:10.616 Malloc1 00:06:10.616 04:09:23 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:10.616 04:09:23 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.616 04:09:23 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:10.616 04:09:23 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:10.616 04:09:23 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.616 04:09:23 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:10.616 04:09:23 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:10.616 04:09:23 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.616 04:09:23 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:10.616 04:09:23 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:10.616 04:09:23 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.616 04:09:23 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:10.616 04:09:23 -- bdev/nbd_common.sh@12 -- # local i 00:06:10.616 04:09:23 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:10.616 04:09:23 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:10.616 04:09:23 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:10.875 /dev/nbd0 00:06:10.875 04:09:23 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:10.875 04:09:23 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:10.875 04:09:23 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:10.875 04:09:23 -- common/autotest_common.sh@867 -- # local i 00:06:10.875 04:09:23 -- common/autotest_common.sh@869 
-- # (( i = 1 )) 00:06:10.875 04:09:23 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:10.875 04:09:23 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:10.875 04:09:23 -- common/autotest_common.sh@871 -- # break 00:06:10.875 04:09:23 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:10.875 04:09:23 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:10.875 04:09:23 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:10.875 1+0 records in 00:06:10.875 1+0 records out 00:06:10.875 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000247999 s, 16.5 MB/s 00:06:10.875 04:09:23 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:10.875 04:09:23 -- common/autotest_common.sh@884 -- # size=4096 00:06:10.875 04:09:23 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:10.875 04:09:23 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:10.875 04:09:23 -- common/autotest_common.sh@887 -- # return 0 00:06:10.875 04:09:23 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:10.875 04:09:23 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:10.875 04:09:23 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:11.133 /dev/nbd1 00:06:11.133 04:09:23 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:11.133 04:09:23 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:11.133 04:09:23 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:11.133 04:09:23 -- common/autotest_common.sh@867 -- # local i 00:06:11.133 04:09:23 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:11.133 04:09:23 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:11.133 04:09:23 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:11.133 04:09:23 -- common/autotest_common.sh@871 -- # break 00:06:11.133 04:09:23 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:11.133 04:09:23 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:11.133 04:09:23 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:11.133 1+0 records in 00:06:11.133 1+0 records out 00:06:11.133 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000320062 s, 12.8 MB/s 00:06:11.133 04:09:23 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:11.133 04:09:23 -- common/autotest_common.sh@884 -- # size=4096 00:06:11.133 04:09:23 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:11.133 04:09:23 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:11.133 04:09:23 -- common/autotest_common.sh@887 -- # return 0 00:06:11.133 04:09:23 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:11.133 04:09:23 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:11.133 04:09:23 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:11.133 04:09:23 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.391 04:09:23 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:11.649 04:09:23 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:11.649 { 00:06:11.649 "nbd_device": "/dev/nbd0", 00:06:11.649 "bdev_name": "Malloc0" 00:06:11.649 }, 00:06:11.649 { 
00:06:11.649 "nbd_device": "/dev/nbd1", 00:06:11.649 "bdev_name": "Malloc1" 00:06:11.649 } 00:06:11.649 ]' 00:06:11.649 04:09:23 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:11.649 { 00:06:11.649 "nbd_device": "/dev/nbd0", 00:06:11.649 "bdev_name": "Malloc0" 00:06:11.649 }, 00:06:11.649 { 00:06:11.649 "nbd_device": "/dev/nbd1", 00:06:11.649 "bdev_name": "Malloc1" 00:06:11.649 } 00:06:11.649 ]' 00:06:11.649 04:09:23 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:11.649 04:09:24 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:11.649 /dev/nbd1' 00:06:11.649 04:09:24 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:11.649 04:09:24 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:11.649 /dev/nbd1' 00:06:11.649 04:09:24 -- bdev/nbd_common.sh@65 -- # count=2 00:06:11.649 04:09:24 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:11.649 04:09:24 -- bdev/nbd_common.sh@95 -- # count=2 00:06:11.649 04:09:24 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:11.649 04:09:24 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:11.649 04:09:24 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.649 04:09:24 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:11.649 04:09:24 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:11.649 04:09:24 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:11.649 04:09:24 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:11.649 04:09:24 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:11.649 256+0 records in 00:06:11.649 256+0 records out 00:06:11.649 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00667244 s, 157 MB/s 00:06:11.649 04:09:24 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:11.649 04:09:24 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:11.649 256+0 records in 00:06:11.649 256+0 records out 00:06:11.649 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0241927 s, 43.3 MB/s 00:06:11.649 04:09:24 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:11.649 04:09:24 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:11.649 256+0 records in 00:06:11.649 256+0 records out 00:06:11.649 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0303392 s, 34.6 MB/s 00:06:11.649 04:09:24 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:11.649 04:09:24 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.649 04:09:24 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:11.649 04:09:24 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:11.649 04:09:24 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:11.649 04:09:24 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:11.649 04:09:24 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:11.649 04:09:24 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:11.649 04:09:24 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:11.649 04:09:24 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:11.649 04:09:24 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:11.649 
04:09:24 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:11.649 04:09:24 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:11.649 04:09:24 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.649 04:09:24 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.649 04:09:24 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:11.649 04:09:24 -- bdev/nbd_common.sh@51 -- # local i 00:06:11.649 04:09:24 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:11.649 04:09:24 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:11.907 04:09:24 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:11.907 04:09:24 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:11.907 04:09:24 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:11.907 04:09:24 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:11.907 04:09:24 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:11.907 04:09:24 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:11.907 04:09:24 -- bdev/nbd_common.sh@41 -- # break 00:06:11.907 04:09:24 -- bdev/nbd_common.sh@45 -- # return 0 00:06:11.907 04:09:24 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:11.907 04:09:24 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:12.165 04:09:24 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:12.165 04:09:24 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:12.165 04:09:24 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:12.165 04:09:24 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:12.165 04:09:24 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:12.165 04:09:24 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:12.165 04:09:24 -- bdev/nbd_common.sh@41 -- # break 00:06:12.165 04:09:24 -- bdev/nbd_common.sh@45 -- # return 0 00:06:12.165 04:09:24 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:12.165 04:09:24 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.165 04:09:24 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:12.782 04:09:24 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:12.782 04:09:24 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:12.782 04:09:24 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:12.782 04:09:25 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:12.782 04:09:25 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:12.782 04:09:25 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:12.782 04:09:25 -- bdev/nbd_common.sh@65 -- # true 00:06:12.782 04:09:25 -- bdev/nbd_common.sh@65 -- # count=0 00:06:12.782 04:09:25 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:12.782 04:09:25 -- bdev/nbd_common.sh@104 -- # count=0 00:06:12.782 04:09:25 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:12.782 04:09:25 -- bdev/nbd_common.sh@109 -- # return 0 00:06:12.782 04:09:25 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:13.039 04:09:25 -- event/event.sh@35 -- # sleep 3 00:06:13.039 [2024-12-06 04:09:25.537933] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:13.039 [2024-12-06 04:09:25.601177] reactor.c: 937:reactor_run: *NOTICE*: Reactor 
started on core 1 00:06:13.039 [2024-12-06 04:09:25.601190] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.297 [2024-12-06 04:09:25.660597] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:13.297 [2024-12-06 04:09:25.660655] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:15.835 spdk_app_start Round 2 00:06:15.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:15.835 04:09:28 -- event/event.sh@23 -- # for i in {0..2} 00:06:15.835 04:09:28 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:15.835 04:09:28 -- event/event.sh@25 -- # waitforlisten 66999 /var/tmp/spdk-nbd.sock 00:06:15.835 04:09:28 -- common/autotest_common.sh@829 -- # '[' -z 66999 ']' 00:06:15.835 04:09:28 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:15.835 04:09:28 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:15.835 04:09:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:15.835 04:09:28 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:15.835 04:09:28 -- common/autotest_common.sh@10 -- # set +x 00:06:16.402 04:09:28 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:16.402 04:09:28 -- common/autotest_common.sh@862 -- # return 0 00:06:16.402 04:09:28 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:16.402 Malloc0 00:06:16.402 04:09:28 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:16.661 Malloc1 00:06:16.919 04:09:29 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:16.919 04:09:29 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.919 04:09:29 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:16.919 04:09:29 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:16.919 04:09:29 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:16.919 04:09:29 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:16.919 04:09:29 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:16.919 04:09:29 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.919 04:09:29 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:16.919 04:09:29 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:16.919 04:09:29 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:16.919 04:09:29 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:16.919 04:09:29 -- bdev/nbd_common.sh@12 -- # local i 00:06:16.919 04:09:29 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:16.919 04:09:29 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:16.919 04:09:29 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:16.919 /dev/nbd0 00:06:17.178 04:09:29 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:17.178 04:09:29 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:17.178 04:09:29 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:17.179 04:09:29 -- common/autotest_common.sh@867 -- # local i 00:06:17.179 04:09:29 
-- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:17.179 04:09:29 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:17.179 04:09:29 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:17.179 04:09:29 -- common/autotest_common.sh@871 -- # break 00:06:17.179 04:09:29 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:17.179 04:09:29 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:17.179 04:09:29 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:17.179 1+0 records in 00:06:17.179 1+0 records out 00:06:17.179 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000407 s, 10.1 MB/s 00:06:17.179 04:09:29 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:17.179 04:09:29 -- common/autotest_common.sh@884 -- # size=4096 00:06:17.179 04:09:29 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:17.179 04:09:29 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:17.179 04:09:29 -- common/autotest_common.sh@887 -- # return 0 00:06:17.179 04:09:29 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:17.179 04:09:29 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:17.179 04:09:29 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:17.436 /dev/nbd1 00:06:17.436 04:09:29 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:17.436 04:09:29 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:17.436 04:09:29 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:17.436 04:09:29 -- common/autotest_common.sh@867 -- # local i 00:06:17.436 04:09:29 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:17.436 04:09:29 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:17.436 04:09:29 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:17.436 04:09:29 -- common/autotest_common.sh@871 -- # break 00:06:17.436 04:09:29 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:17.436 04:09:29 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:17.436 04:09:29 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:17.436 1+0 records in 00:06:17.436 1+0 records out 00:06:17.436 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000281047 s, 14.6 MB/s 00:06:17.436 04:09:29 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:17.436 04:09:29 -- common/autotest_common.sh@884 -- # size=4096 00:06:17.436 04:09:29 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:17.436 04:09:29 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:17.436 04:09:29 -- common/autotest_common.sh@887 -- # return 0 00:06:17.436 04:09:29 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:17.436 04:09:29 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:17.436 04:09:29 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:17.436 04:09:29 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.436 04:09:29 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:17.694 04:09:30 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:17.694 { 00:06:17.694 "nbd_device": "/dev/nbd0", 00:06:17.694 "bdev_name": "Malloc0" 
00:06:17.694 }, 00:06:17.694 { 00:06:17.694 "nbd_device": "/dev/nbd1", 00:06:17.694 "bdev_name": "Malloc1" 00:06:17.694 } 00:06:17.694 ]' 00:06:17.694 04:09:30 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:17.694 { 00:06:17.694 "nbd_device": "/dev/nbd0", 00:06:17.694 "bdev_name": "Malloc0" 00:06:17.694 }, 00:06:17.694 { 00:06:17.694 "nbd_device": "/dev/nbd1", 00:06:17.694 "bdev_name": "Malloc1" 00:06:17.694 } 00:06:17.694 ]' 00:06:17.694 04:09:30 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:17.694 04:09:30 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:17.694 /dev/nbd1' 00:06:17.694 04:09:30 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:17.694 /dev/nbd1' 00:06:17.694 04:09:30 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:17.694 04:09:30 -- bdev/nbd_common.sh@65 -- # count=2 00:06:17.694 04:09:30 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:17.694 04:09:30 -- bdev/nbd_common.sh@95 -- # count=2 00:06:17.694 04:09:30 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:17.694 04:09:30 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:17.694 04:09:30 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:17.694 04:09:30 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:17.694 04:09:30 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:17.694 04:09:30 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:17.694 04:09:30 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:17.694 04:09:30 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:17.694 256+0 records in 00:06:17.694 256+0 records out 00:06:17.694 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0110745 s, 94.7 MB/s 00:06:17.694 04:09:30 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:17.694 04:09:30 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:17.694 256+0 records in 00:06:17.694 256+0 records out 00:06:17.694 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0271321 s, 38.6 MB/s 00:06:17.694 04:09:30 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:17.694 04:09:30 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:17.953 256+0 records in 00:06:17.953 256+0 records out 00:06:17.953 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0297801 s, 35.2 MB/s 00:06:17.953 04:09:30 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:17.953 04:09:30 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:17.953 04:09:30 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:17.953 04:09:30 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:17.953 04:09:30 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:17.953 04:09:30 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:17.953 04:09:30 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:17.953 04:09:30 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:17.953 04:09:30 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:17.953 04:09:30 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:17.953 04:09:30 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:06:17.953 04:09:30 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:17.953 04:09:30 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:17.953 04:09:30 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.953 04:09:30 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:17.953 04:09:30 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:17.953 04:09:30 -- bdev/nbd_common.sh@51 -- # local i 00:06:17.953 04:09:30 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:17.953 04:09:30 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:18.212 04:09:30 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:18.212 04:09:30 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:18.212 04:09:30 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:18.212 04:09:30 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:18.212 04:09:30 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:18.212 04:09:30 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:18.212 04:09:30 -- bdev/nbd_common.sh@41 -- # break 00:06:18.212 04:09:30 -- bdev/nbd_common.sh@45 -- # return 0 00:06:18.212 04:09:30 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:18.212 04:09:30 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:18.472 04:09:30 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:18.472 04:09:30 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:18.472 04:09:30 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:18.472 04:09:30 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:18.472 04:09:30 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:18.472 04:09:30 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:18.472 04:09:30 -- bdev/nbd_common.sh@41 -- # break 00:06:18.472 04:09:30 -- bdev/nbd_common.sh@45 -- # return 0 00:06:18.472 04:09:30 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:18.472 04:09:30 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.472 04:09:30 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:18.730 04:09:31 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:18.730 04:09:31 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:18.730 04:09:31 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:18.730 04:09:31 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:18.730 04:09:31 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:18.731 04:09:31 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:18.731 04:09:31 -- bdev/nbd_common.sh@65 -- # true 00:06:18.731 04:09:31 -- bdev/nbd_common.sh@65 -- # count=0 00:06:18.731 04:09:31 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:18.731 04:09:31 -- bdev/nbd_common.sh@104 -- # count=0 00:06:18.731 04:09:31 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:18.731 04:09:31 -- bdev/nbd_common.sh@109 -- # return 0 00:06:18.731 04:09:31 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:19.298 04:09:31 -- event/event.sh@35 -- # sleep 3 00:06:19.298 [2024-12-06 04:09:31.824292] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:19.557 [2024-12-06 04:09:31.919945] reactor.c: 937:reactor_run: 
*NOTICE*: Reactor started on core 1 00:06:19.557 [2024-12-06 04:09:31.919977] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.557 [2024-12-06 04:09:31.986652] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:19.557 [2024-12-06 04:09:31.986724] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:22.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:22.091 04:09:34 -- event/event.sh@38 -- # waitforlisten 66999 /var/tmp/spdk-nbd.sock 00:06:22.091 04:09:34 -- common/autotest_common.sh@829 -- # '[' -z 66999 ']' 00:06:22.091 04:09:34 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:22.091 04:09:34 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:22.091 04:09:34 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:22.091 04:09:34 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:22.091 04:09:34 -- common/autotest_common.sh@10 -- # set +x 00:06:22.350 04:09:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:22.350 04:09:34 -- common/autotest_common.sh@862 -- # return 0 00:06:22.350 04:09:34 -- event/event.sh@39 -- # killprocess 66999 00:06:22.350 04:09:34 -- common/autotest_common.sh@936 -- # '[' -z 66999 ']' 00:06:22.350 04:09:34 -- common/autotest_common.sh@940 -- # kill -0 66999 00:06:22.350 04:09:34 -- common/autotest_common.sh@941 -- # uname 00:06:22.350 04:09:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:22.350 04:09:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66999 00:06:22.350 killing process with pid 66999 00:06:22.350 04:09:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:22.350 04:09:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:22.350 04:09:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66999' 00:06:22.350 04:09:34 -- common/autotest_common.sh@955 -- # kill 66999 00:06:22.350 04:09:34 -- common/autotest_common.sh@960 -- # wait 66999 00:06:22.609 spdk_app_start is called in Round 0. 00:06:22.609 Shutdown signal received, stop current app iteration 00:06:22.609 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:06:22.609 spdk_app_start is called in Round 1. 00:06:22.609 Shutdown signal received, stop current app iteration 00:06:22.609 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:06:22.609 spdk_app_start is called in Round 2. 00:06:22.609 Shutdown signal received, stop current app iteration 00:06:22.609 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:06:22.609 spdk_app_start is called in Round 3. 
00:06:22.609 Shutdown signal received, stop current app iteration 00:06:22.609 04:09:35 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:22.609 04:09:35 -- event/event.sh@42 -- # return 0 00:06:22.609 00:06:22.609 real 0m19.800s 00:06:22.609 user 0m44.765s 00:06:22.609 sys 0m3.074s 00:06:22.609 04:09:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:22.609 ************************************ 00:06:22.609 END TEST app_repeat 00:06:22.609 ************************************ 00:06:22.609 04:09:35 -- common/autotest_common.sh@10 -- # set +x 00:06:22.609 04:09:35 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:22.609 04:09:35 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:22.609 04:09:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:22.609 04:09:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:22.609 04:09:35 -- common/autotest_common.sh@10 -- # set +x 00:06:22.609 ************************************ 00:06:22.609 START TEST cpu_locks 00:06:22.609 ************************************ 00:06:22.609 04:09:35 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:22.868 * Looking for test storage... 00:06:22.868 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:22.868 04:09:35 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:22.868 04:09:35 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:22.868 04:09:35 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:22.868 04:09:35 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:22.868 04:09:35 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:22.868 04:09:35 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:22.868 04:09:35 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:22.868 04:09:35 -- scripts/common.sh@335 -- # IFS=.-: 00:06:22.868 04:09:35 -- scripts/common.sh@335 -- # read -ra ver1 00:06:22.868 04:09:35 -- scripts/common.sh@336 -- # IFS=.-: 00:06:22.868 04:09:35 -- scripts/common.sh@336 -- # read -ra ver2 00:06:22.868 04:09:35 -- scripts/common.sh@337 -- # local 'op=<' 00:06:22.868 04:09:35 -- scripts/common.sh@339 -- # ver1_l=2 00:06:22.868 04:09:35 -- scripts/common.sh@340 -- # ver2_l=1 00:06:22.868 04:09:35 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:22.868 04:09:35 -- scripts/common.sh@343 -- # case "$op" in 00:06:22.868 04:09:35 -- scripts/common.sh@344 -- # : 1 00:06:22.868 04:09:35 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:22.868 04:09:35 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:22.868 04:09:35 -- scripts/common.sh@364 -- # decimal 1 00:06:22.868 04:09:35 -- scripts/common.sh@352 -- # local d=1 00:06:22.868 04:09:35 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:22.868 04:09:35 -- scripts/common.sh@354 -- # echo 1 00:06:22.868 04:09:35 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:22.868 04:09:35 -- scripts/common.sh@365 -- # decimal 2 00:06:22.868 04:09:35 -- scripts/common.sh@352 -- # local d=2 00:06:22.868 04:09:35 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:22.868 04:09:35 -- scripts/common.sh@354 -- # echo 2 00:06:22.868 04:09:35 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:22.868 04:09:35 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:22.868 04:09:35 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:22.868 04:09:35 -- scripts/common.sh@367 -- # return 0 00:06:22.868 04:09:35 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:22.868 04:09:35 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:22.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.868 --rc genhtml_branch_coverage=1 00:06:22.868 --rc genhtml_function_coverage=1 00:06:22.868 --rc genhtml_legend=1 00:06:22.868 --rc geninfo_all_blocks=1 00:06:22.868 --rc geninfo_unexecuted_blocks=1 00:06:22.868 00:06:22.868 ' 00:06:22.868 04:09:35 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:22.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.868 --rc genhtml_branch_coverage=1 00:06:22.868 --rc genhtml_function_coverage=1 00:06:22.868 --rc genhtml_legend=1 00:06:22.868 --rc geninfo_all_blocks=1 00:06:22.868 --rc geninfo_unexecuted_blocks=1 00:06:22.868 00:06:22.868 ' 00:06:22.868 04:09:35 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:22.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.868 --rc genhtml_branch_coverage=1 00:06:22.868 --rc genhtml_function_coverage=1 00:06:22.868 --rc genhtml_legend=1 00:06:22.868 --rc geninfo_all_blocks=1 00:06:22.868 --rc geninfo_unexecuted_blocks=1 00:06:22.868 00:06:22.868 ' 00:06:22.868 04:09:35 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:22.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.868 --rc genhtml_branch_coverage=1 00:06:22.868 --rc genhtml_function_coverage=1 00:06:22.868 --rc genhtml_legend=1 00:06:22.868 --rc geninfo_all_blocks=1 00:06:22.868 --rc geninfo_unexecuted_blocks=1 00:06:22.869 00:06:22.869 ' 00:06:22.869 04:09:35 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:22.869 04:09:35 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:22.869 04:09:35 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:22.869 04:09:35 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:22.869 04:09:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:22.869 04:09:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:22.869 04:09:35 -- common/autotest_common.sh@10 -- # set +x 00:06:22.869 ************************************ 00:06:22.869 START TEST default_locks 00:06:22.869 ************************************ 00:06:22.869 04:09:35 -- common/autotest_common.sh@1114 -- # default_locks 00:06:22.869 04:09:35 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=67450 00:06:22.869 04:09:35 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:22.869 04:09:35 -- event/cpu_locks.sh@47 -- # waitforlisten 
67450 00:06:22.869 04:09:35 -- common/autotest_common.sh@829 -- # '[' -z 67450 ']' 00:06:22.869 04:09:35 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.869 04:09:35 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:22.869 04:09:35 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:22.869 04:09:35 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:22.869 04:09:35 -- common/autotest_common.sh@10 -- # set +x 00:06:22.869 [2024-12-06 04:09:35.410070] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:22.869 [2024-12-06 04:09:35.410326] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67450 ] 00:06:23.136 [2024-12-06 04:09:35.548028] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.136 [2024-12-06 04:09:35.624290] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:23.136 [2024-12-06 04:09:35.624595] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.082 04:09:36 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:24.082 04:09:36 -- common/autotest_common.sh@862 -- # return 0 00:06:24.082 04:09:36 -- event/cpu_locks.sh@49 -- # locks_exist 67450 00:06:24.082 04:09:36 -- event/cpu_locks.sh@22 -- # lslocks -p 67450 00:06:24.082 04:09:36 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:24.341 04:09:36 -- event/cpu_locks.sh@50 -- # killprocess 67450 00:06:24.341 04:09:36 -- common/autotest_common.sh@936 -- # '[' -z 67450 ']' 00:06:24.341 04:09:36 -- common/autotest_common.sh@940 -- # kill -0 67450 00:06:24.341 04:09:36 -- common/autotest_common.sh@941 -- # uname 00:06:24.342 04:09:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:24.342 04:09:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67450 00:06:24.342 killing process with pid 67450 00:06:24.342 04:09:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:24.342 04:09:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:24.342 04:09:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67450' 00:06:24.342 04:09:36 -- common/autotest_common.sh@955 -- # kill 67450 00:06:24.342 04:09:36 -- common/autotest_common.sh@960 -- # wait 67450 00:06:24.601 04:09:37 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 67450 00:06:24.601 04:09:37 -- common/autotest_common.sh@650 -- # local es=0 00:06:24.601 04:09:37 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 67450 00:06:24.601 04:09:37 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:24.601 04:09:37 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:24.601 04:09:37 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:24.601 04:09:37 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:24.601 04:09:37 -- common/autotest_common.sh@653 -- # waitforlisten 67450 00:06:24.601 04:09:37 -- common/autotest_common.sh@829 -- # '[' -z 67450 ']' 00:06:24.601 04:09:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.601 04:09:37 -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:06:24.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:24.601 ERROR: process (pid: 67450) is no longer running 00:06:24.601 04:09:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.601 04:09:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:24.601 04:09:37 -- common/autotest_common.sh@10 -- # set +x 00:06:24.601 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (67450) - No such process 00:06:24.601 04:09:37 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:24.601 04:09:37 -- common/autotest_common.sh@862 -- # return 1 00:06:24.601 04:09:37 -- common/autotest_common.sh@653 -- # es=1 00:06:24.601 04:09:37 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:24.601 04:09:37 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:24.601 04:09:37 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:24.601 04:09:37 -- event/cpu_locks.sh@54 -- # no_locks 00:06:24.601 04:09:37 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:24.601 04:09:37 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:24.601 ************************************ 00:06:24.601 END TEST default_locks 00:06:24.601 ************************************ 00:06:24.601 04:09:37 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:24.601 00:06:24.601 real 0m1.784s 00:06:24.601 user 0m1.935s 00:06:24.601 sys 0m0.514s 00:06:24.601 04:09:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:24.601 04:09:37 -- common/autotest_common.sh@10 -- # set +x 00:06:24.860 04:09:37 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:24.860 04:09:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:24.860 04:09:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:24.860 04:09:37 -- common/autotest_common.sh@10 -- # set +x 00:06:24.860 ************************************ 00:06:24.860 START TEST default_locks_via_rpc 00:06:24.860 ************************************ 00:06:24.860 04:09:37 -- common/autotest_common.sh@1114 -- # default_locks_via_rpc 00:06:24.860 04:09:37 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=67502 00:06:24.860 04:09:37 -- event/cpu_locks.sh@63 -- # waitforlisten 67502 00:06:24.860 04:09:37 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:24.860 04:09:37 -- common/autotest_common.sh@829 -- # '[' -z 67502 ']' 00:06:24.860 04:09:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.860 04:09:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:24.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:24.860 04:09:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.860 04:09:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:24.860 04:09:37 -- common/autotest_common.sh@10 -- # set +x 00:06:24.860 [2024-12-06 04:09:37.241356] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:24.860 [2024-12-06 04:09:37.241485] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67502 ] 00:06:24.860 [2024-12-06 04:09:37.383677] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.119 [2024-12-06 04:09:37.455257] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:25.119 [2024-12-06 04:09:37.455467] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.687 04:09:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:25.687 04:09:38 -- common/autotest_common.sh@862 -- # return 0 00:06:25.687 04:09:38 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:25.687 04:09:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.687 04:09:38 -- common/autotest_common.sh@10 -- # set +x 00:06:25.687 04:09:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.687 04:09:38 -- event/cpu_locks.sh@67 -- # no_locks 00:06:25.687 04:09:38 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:25.687 04:09:38 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:25.687 04:09:38 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:25.687 04:09:38 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:25.687 04:09:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.687 04:09:38 -- common/autotest_common.sh@10 -- # set +x 00:06:25.946 04:09:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.946 04:09:38 -- event/cpu_locks.sh@71 -- # locks_exist 67502 00:06:25.946 04:09:38 -- event/cpu_locks.sh@22 -- # lslocks -p 67502 00:06:25.946 04:09:38 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:26.205 04:09:38 -- event/cpu_locks.sh@73 -- # killprocess 67502 00:06:26.205 04:09:38 -- common/autotest_common.sh@936 -- # '[' -z 67502 ']' 00:06:26.205 04:09:38 -- common/autotest_common.sh@940 -- # kill -0 67502 00:06:26.205 04:09:38 -- common/autotest_common.sh@941 -- # uname 00:06:26.205 04:09:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:26.205 04:09:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67502 00:06:26.205 04:09:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:26.205 killing process with pid 67502 00:06:26.205 04:09:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:26.205 04:09:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67502' 00:06:26.205 04:09:38 -- common/autotest_common.sh@955 -- # kill 67502 00:06:26.205 04:09:38 -- common/autotest_common.sh@960 -- # wait 67502 00:06:26.774 00:06:26.774 real 0m1.895s 00:06:26.774 user 0m2.059s 00:06:26.774 sys 0m0.572s 00:06:26.774 ************************************ 00:06:26.774 04:09:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:26.774 04:09:39 -- common/autotest_common.sh@10 -- # set +x 00:06:26.774 END TEST default_locks_via_rpc 00:06:26.774 ************************************ 00:06:26.774 04:09:39 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:26.774 04:09:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:26.774 04:09:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:26.774 04:09:39 -- common/autotest_common.sh@10 -- # set +x 00:06:26.774 
************************************ 00:06:26.774 START TEST non_locking_app_on_locked_coremask 00:06:26.774 ************************************ 00:06:26.774 04:09:39 -- common/autotest_common.sh@1114 -- # non_locking_app_on_locked_coremask 00:06:26.774 04:09:39 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=67553 00:06:26.774 04:09:39 -- event/cpu_locks.sh@81 -- # waitforlisten 67553 /var/tmp/spdk.sock 00:06:26.774 04:09:39 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:26.774 04:09:39 -- common/autotest_common.sh@829 -- # '[' -z 67553 ']' 00:06:26.774 04:09:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.774 04:09:39 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:26.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.774 04:09:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.774 04:09:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:26.774 04:09:39 -- common/autotest_common.sh@10 -- # set +x 00:06:26.774 [2024-12-06 04:09:39.183994] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:26.774 [2024-12-06 04:09:39.184096] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67553 ] 00:06:26.774 [2024-12-06 04:09:39.318410] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.033 [2024-12-06 04:09:39.392102] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:27.033 [2024-12-06 04:09:39.392259] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.602 04:09:40 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:27.602 04:09:40 -- common/autotest_common.sh@862 -- # return 0 00:06:27.602 04:09:40 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=67569 00:06:27.602 04:09:40 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:27.602 04:09:40 -- event/cpu_locks.sh@85 -- # waitforlisten 67569 /var/tmp/spdk2.sock 00:06:27.602 04:09:40 -- common/autotest_common.sh@829 -- # '[' -z 67569 ']' 00:06:27.602 04:09:40 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:27.602 04:09:40 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:27.602 04:09:40 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:27.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:27.602 04:09:40 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:27.602 04:09:40 -- common/autotest_common.sh@10 -- # set +x 00:06:27.861 [2024-12-06 04:09:40.215911] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:27.861 [2024-12-06 04:09:40.216012] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67569 ] 00:06:27.861 [2024-12-06 04:09:40.363709] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:27.861 [2024-12-06 04:09:40.363765] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.120 [2024-12-06 04:09:40.517824] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:28.120 [2024-12-06 04:09:40.517987] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.688 04:09:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:28.688 04:09:41 -- common/autotest_common.sh@862 -- # return 0 00:06:28.688 04:09:41 -- event/cpu_locks.sh@87 -- # locks_exist 67553 00:06:28.688 04:09:41 -- event/cpu_locks.sh@22 -- # lslocks -p 67553 00:06:28.688 04:09:41 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:29.626 04:09:41 -- event/cpu_locks.sh@89 -- # killprocess 67553 00:06:29.626 04:09:41 -- common/autotest_common.sh@936 -- # '[' -z 67553 ']' 00:06:29.626 04:09:41 -- common/autotest_common.sh@940 -- # kill -0 67553 00:06:29.626 04:09:41 -- common/autotest_common.sh@941 -- # uname 00:06:29.626 04:09:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:29.626 04:09:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67553 00:06:29.626 04:09:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:29.626 04:09:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:29.626 killing process with pid 67553 00:06:29.626 04:09:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67553' 00:06:29.626 04:09:42 -- common/autotest_common.sh@955 -- # kill 67553 00:06:29.626 04:09:42 -- common/autotest_common.sh@960 -- # wait 67553 00:06:30.563 04:09:42 -- event/cpu_locks.sh@90 -- # killprocess 67569 00:06:30.563 04:09:42 -- common/autotest_common.sh@936 -- # '[' -z 67569 ']' 00:06:30.563 04:09:42 -- common/autotest_common.sh@940 -- # kill -0 67569 00:06:30.563 04:09:42 -- common/autotest_common.sh@941 -- # uname 00:06:30.563 04:09:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:30.563 04:09:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67569 00:06:30.563 04:09:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:30.563 killing process with pid 67569 00:06:30.563 04:09:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:30.563 04:09:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67569' 00:06:30.563 04:09:42 -- common/autotest_common.sh@955 -- # kill 67569 00:06:30.563 04:09:42 -- common/autotest_common.sh@960 -- # wait 67569 00:06:30.822 00:06:30.822 real 0m4.052s 00:06:30.822 user 0m4.505s 00:06:30.822 sys 0m1.133s 00:06:30.822 04:09:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:30.823 04:09:43 -- common/autotest_common.sh@10 -- # set +x 00:06:30.823 ************************************ 00:06:30.823 END TEST non_locking_app_on_locked_coremask 00:06:30.823 ************************************ 00:06:30.823 04:09:43 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:30.823 04:09:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:30.823 04:09:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:30.823 04:09:43 -- common/autotest_common.sh@10 -- # set +x 00:06:30.823 ************************************ 00:06:30.823 START TEST locking_app_on_unlocked_coremask 00:06:30.823 ************************************ 00:06:30.823 04:09:43 -- common/autotest_common.sh@1114 -- # locking_app_on_unlocked_coremask 00:06:30.823 04:09:43 -- 
event/cpu_locks.sh@98 -- # spdk_tgt_pid=67636 00:06:30.823 04:09:43 -- event/cpu_locks.sh@99 -- # waitforlisten 67636 /var/tmp/spdk.sock 00:06:30.823 04:09:43 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:30.823 04:09:43 -- common/autotest_common.sh@829 -- # '[' -z 67636 ']' 00:06:30.823 04:09:43 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.823 04:09:43 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:30.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:30.823 04:09:43 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:30.823 04:09:43 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:30.823 04:09:43 -- common/autotest_common.sh@10 -- # set +x 00:06:30.823 [2024-12-06 04:09:43.288432] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:30.823 [2024-12-06 04:09:43.288541] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67636 ] 00:06:31.082 [2024-12-06 04:09:43.427020] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:31.082 [2024-12-06 04:09:43.427086] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.082 [2024-12-06 04:09:43.511083] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:31.082 [2024-12-06 04:09:43.511287] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.018 04:09:44 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:32.018 04:09:44 -- common/autotest_common.sh@862 -- # return 0 00:06:32.018 04:09:44 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=67652 00:06:32.018 04:09:44 -- event/cpu_locks.sh@103 -- # waitforlisten 67652 /var/tmp/spdk2.sock 00:06:32.018 04:09:44 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:32.018 04:09:44 -- common/autotest_common.sh@829 -- # '[' -z 67652 ']' 00:06:32.018 04:09:44 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:32.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:32.018 04:09:44 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:32.018 04:09:44 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:32.018 04:09:44 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:32.018 04:09:44 -- common/autotest_common.sh@10 -- # set +x 00:06:32.018 [2024-12-06 04:09:44.353329] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:32.018 [2024-12-06 04:09:44.353446] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67652 ] 00:06:32.018 [2024-12-06 04:09:44.494970] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.277 [2024-12-06 04:09:44.662595] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:32.277 [2024-12-06 04:09:44.662751] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.844 04:09:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:32.844 04:09:45 -- common/autotest_common.sh@862 -- # return 0 00:06:32.844 04:09:45 -- event/cpu_locks.sh@105 -- # locks_exist 67652 00:06:32.844 04:09:45 -- event/cpu_locks.sh@22 -- # lslocks -p 67652 00:06:32.844 04:09:45 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:33.779 04:09:46 -- event/cpu_locks.sh@107 -- # killprocess 67636 00:06:33.779 04:09:46 -- common/autotest_common.sh@936 -- # '[' -z 67636 ']' 00:06:33.779 04:09:46 -- common/autotest_common.sh@940 -- # kill -0 67636 00:06:33.779 04:09:46 -- common/autotest_common.sh@941 -- # uname 00:06:33.779 04:09:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:33.779 04:09:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67636 00:06:33.779 04:09:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:33.779 04:09:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:33.779 killing process with pid 67636 00:06:33.779 04:09:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67636' 00:06:33.779 04:09:46 -- common/autotest_common.sh@955 -- # kill 67636 00:06:33.779 04:09:46 -- common/autotest_common.sh@960 -- # wait 67636 00:06:34.715 04:09:47 -- event/cpu_locks.sh@108 -- # killprocess 67652 00:06:34.715 04:09:47 -- common/autotest_common.sh@936 -- # '[' -z 67652 ']' 00:06:34.715 04:09:47 -- common/autotest_common.sh@940 -- # kill -0 67652 00:06:34.715 04:09:47 -- common/autotest_common.sh@941 -- # uname 00:06:34.715 04:09:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:34.715 04:09:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67652 00:06:34.715 killing process with pid 67652 00:06:34.715 04:09:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:34.715 04:09:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:34.715 04:09:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67652' 00:06:34.715 04:09:47 -- common/autotest_common.sh@955 -- # kill 67652 00:06:34.715 04:09:47 -- common/autotest_common.sh@960 -- # wait 67652 00:06:34.974 ************************************ 00:06:34.974 END TEST locking_app_on_unlocked_coremask 00:06:34.974 ************************************ 00:06:34.974 00:06:34.974 real 0m4.291s 00:06:34.974 user 0m4.795s 00:06:34.974 sys 0m1.173s 00:06:34.974 04:09:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:34.974 04:09:47 -- common/autotest_common.sh@10 -- # set +x 00:06:35.233 04:09:47 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:35.233 04:09:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:35.233 04:09:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:35.233 04:09:47 -- common/autotest_common.sh@10 -- # set +x 
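The locks_exist helper that keeps appearing in this trace (cpu_locks.sh@22) reduces to a single lslocks check; a minimal standalone equivalent, assuming the /var/tmp/spdk_cpu_lock_* naming shown later in the log:

    locks_exist() {
        local pid=$1
        # spdk_tgt holds a lock on /var/tmp/spdk_cpu_lock_NNN for each core it claims;
        # lslocks lists that process's locks and grep looks for the prefix.
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }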
00:06:35.233 ************************************ 00:06:35.233 START TEST locking_app_on_locked_coremask 00:06:35.233 ************************************ 00:06:35.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:35.233 04:09:47 -- common/autotest_common.sh@1114 -- # locking_app_on_locked_coremask 00:06:35.233 04:09:47 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=67721 00:06:35.233 04:09:47 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:35.233 04:09:47 -- event/cpu_locks.sh@116 -- # waitforlisten 67721 /var/tmp/spdk.sock 00:06:35.233 04:09:47 -- common/autotest_common.sh@829 -- # '[' -z 67721 ']' 00:06:35.233 04:09:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.233 04:09:47 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:35.233 04:09:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.233 04:09:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:35.233 04:09:47 -- common/autotest_common.sh@10 -- # set +x 00:06:35.233 [2024-12-06 04:09:47.634427] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:35.233 [2024-12-06 04:09:47.634533] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67721 ] 00:06:35.233 [2024-12-06 04:09:47.774977] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.493 [2024-12-06 04:09:47.848036] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:35.493 [2024-12-06 04:09:47.848223] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.431 04:09:48 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:36.431 04:09:48 -- common/autotest_common.sh@862 -- # return 0 00:06:36.431 04:09:48 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=67737 00:06:36.431 04:09:48 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:36.431 04:09:48 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 67737 /var/tmp/spdk2.sock 00:06:36.431 04:09:48 -- common/autotest_common.sh@650 -- # local es=0 00:06:36.431 04:09:48 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 67737 /var/tmp/spdk2.sock 00:06:36.431 04:09:48 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:36.431 04:09:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:36.431 04:09:48 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:36.431 04:09:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:36.431 04:09:48 -- common/autotest_common.sh@653 -- # waitforlisten 67737 /var/tmp/spdk2.sock 00:06:36.431 04:09:48 -- common/autotest_common.sh@829 -- # '[' -z 67737 ']' 00:06:36.431 04:09:48 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:36.431 04:09:48 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:36.431 04:09:48 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:36.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:36.431 04:09:48 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:36.431 04:09:48 -- common/autotest_common.sh@10 -- # set +x 00:06:36.431 [2024-12-06 04:09:48.705672] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:36.431 [2024-12-06 04:09:48.706216] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67737 ] 00:06:36.431 [2024-12-06 04:09:48.849751] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 67721 has claimed it. 00:06:36.431 [2024-12-06 04:09:48.849828] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:36.998 ERROR: process (pid: 67737) is no longer running 00:06:36.998 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (67737) - No such process 00:06:36.998 04:09:49 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:36.998 04:09:49 -- common/autotest_common.sh@862 -- # return 1 00:06:36.998 04:09:49 -- common/autotest_common.sh@653 -- # es=1 00:06:36.998 04:09:49 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:36.998 04:09:49 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:36.998 04:09:49 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:36.998 04:09:49 -- event/cpu_locks.sh@122 -- # locks_exist 67721 00:06:36.998 04:09:49 -- event/cpu_locks.sh@22 -- # lslocks -p 67721 00:06:36.998 04:09:49 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:37.566 04:09:49 -- event/cpu_locks.sh@124 -- # killprocess 67721 00:06:37.566 04:09:49 -- common/autotest_common.sh@936 -- # '[' -z 67721 ']' 00:06:37.566 04:09:49 -- common/autotest_common.sh@940 -- # kill -0 67721 00:06:37.566 04:09:49 -- common/autotest_common.sh@941 -- # uname 00:06:37.566 04:09:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:37.566 04:09:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67721 00:06:37.566 killing process with pid 67721 00:06:37.566 04:09:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:37.566 04:09:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:37.566 04:09:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67721' 00:06:37.566 04:09:49 -- common/autotest_common.sh@955 -- # kill 67721 00:06:37.566 04:09:49 -- common/autotest_common.sh@960 -- # wait 67721 00:06:37.825 00:06:37.825 real 0m2.709s 00:06:37.825 user 0m3.150s 00:06:37.825 sys 0m0.657s 00:06:37.825 04:09:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:37.825 04:09:50 -- common/autotest_common.sh@10 -- # set +x 00:06:37.825 ************************************ 00:06:37.825 END TEST locking_app_on_locked_coremask 00:06:37.825 ************************************ 00:06:37.825 04:09:50 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:37.825 04:09:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:37.825 04:09:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:37.825 04:09:50 -- common/autotest_common.sh@10 -- # set +x 00:06:37.825 ************************************ 00:06:37.825 START TEST locking_overlapped_coremask 00:06:37.825 ************************************ 00:06:37.825 04:09:50 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask 00:06:37.825 04:09:50 
-- event/cpu_locks.sh@132 -- # spdk_tgt_pid=67788 00:06:37.825 04:09:50 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:37.825 04:09:50 -- event/cpu_locks.sh@133 -- # waitforlisten 67788 /var/tmp/spdk.sock 00:06:37.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:37.825 04:09:50 -- common/autotest_common.sh@829 -- # '[' -z 67788 ']' 00:06:37.825 04:09:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.825 04:09:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:37.825 04:09:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:37.825 04:09:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:37.825 04:09:50 -- common/autotest_common.sh@10 -- # set +x 00:06:38.084 [2024-12-06 04:09:50.392942] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:38.084 [2024-12-06 04:09:50.393711] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67788 ] 00:06:38.084 [2024-12-06 04:09:50.535791] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:38.084 [2024-12-06 04:09:50.612737] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:38.084 [2024-12-06 04:09:50.613369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.084 [2024-12-06 04:09:50.613278] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:38.084 [2024-12-06 04:09:50.613363] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:39.035 04:09:51 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:39.035 04:09:51 -- common/autotest_common.sh@862 -- # return 0 00:06:39.035 04:09:51 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:39.035 04:09:51 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=67807 00:06:39.035 04:09:51 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 67807 /var/tmp/spdk2.sock 00:06:39.035 04:09:51 -- common/autotest_common.sh@650 -- # local es=0 00:06:39.035 04:09:51 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 67807 /var/tmp/spdk2.sock 00:06:39.035 04:09:51 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:39.035 04:09:51 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:39.035 04:09:51 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:39.035 04:09:51 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:39.035 04:09:51 -- common/autotest_common.sh@653 -- # waitforlisten 67807 /var/tmp/spdk2.sock 00:06:39.035 04:09:51 -- common/autotest_common.sh@829 -- # '[' -z 67807 ']' 00:06:39.035 04:09:51 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:39.035 04:09:51 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:39.035 04:09:51 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:39.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
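The claim failure recorded just below follows directly from the two core masks used in this test; a quick overlap check:

    0x7  = 0b00111 -> cores 0, 1, 2   (first target, holds the locks)
    0x1c = 0b11100 -> cores 2, 3, 4   (second target)
    overlap -> core 2, which is why claim_cpu_cores reports that process 67788 already owns it.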
00:06:39.035 04:09:51 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:39.035 04:09:51 -- common/autotest_common.sh@10 -- # set +x 00:06:39.035 [2024-12-06 04:09:51.409568] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:39.035 [2024-12-06 04:09:51.409772] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67807 ] 00:06:39.035 [2024-12-06 04:09:51.550956] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 67788 has claimed it. 00:06:39.035 [2024-12-06 04:09:51.551015] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:39.603 ERROR: process (pid: 67807) is no longer running 00:06:39.603 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (67807) - No such process 00:06:39.603 04:09:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:39.603 04:09:52 -- common/autotest_common.sh@862 -- # return 1 00:06:39.603 04:09:52 -- common/autotest_common.sh@653 -- # es=1 00:06:39.603 04:09:52 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:39.603 04:09:52 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:39.603 04:09:52 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:39.603 04:09:52 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:39.603 04:09:52 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:39.603 04:09:52 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:39.603 04:09:52 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:39.603 04:09:52 -- event/cpu_locks.sh@141 -- # killprocess 67788 00:06:39.603 04:09:52 -- common/autotest_common.sh@936 -- # '[' -z 67788 ']' 00:06:39.603 04:09:52 -- common/autotest_common.sh@940 -- # kill -0 67788 00:06:39.603 04:09:52 -- common/autotest_common.sh@941 -- # uname 00:06:39.603 04:09:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:39.603 04:09:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67788 00:06:39.862 04:09:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:39.862 04:09:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:39.862 04:09:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67788' 00:06:39.862 killing process with pid 67788 00:06:39.862 04:09:52 -- common/autotest_common.sh@955 -- # kill 67788 00:06:39.862 04:09:52 -- common/autotest_common.sh@960 -- # wait 67788 00:06:40.121 00:06:40.121 real 0m2.242s 00:06:40.121 user 0m6.313s 00:06:40.121 sys 0m0.442s 00:06:40.121 ************************************ 00:06:40.121 END TEST locking_overlapped_coremask 00:06:40.121 ************************************ 00:06:40.121 04:09:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:40.121 04:09:52 -- common/autotest_common.sh@10 -- # set +x 00:06:40.121 04:09:52 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:40.121 04:09:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:40.121 04:09:52 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:06:40.121 04:09:52 -- common/autotest_common.sh@10 -- # set +x 00:06:40.121 ************************************ 00:06:40.121 START TEST locking_overlapped_coremask_via_rpc 00:06:40.121 ************************************ 00:06:40.121 04:09:52 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask_via_rpc 00:06:40.121 04:09:52 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=67847 00:06:40.121 04:09:52 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:40.121 04:09:52 -- event/cpu_locks.sh@149 -- # waitforlisten 67847 /var/tmp/spdk.sock 00:06:40.121 04:09:52 -- common/autotest_common.sh@829 -- # '[' -z 67847 ']' 00:06:40.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:40.121 04:09:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.121 04:09:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:40.121 04:09:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.121 04:09:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:40.121 04:09:52 -- common/autotest_common.sh@10 -- # set +x 00:06:40.380 [2024-12-06 04:09:52.684949] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:40.380 [2024-12-06 04:09:52.685058] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67847 ] 00:06:40.380 [2024-12-06 04:09:52.819339] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:40.380 [2024-12-06 04:09:52.819381] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:40.380 [2024-12-06 04:09:52.890280] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:40.380 [2024-12-06 04:09:52.890859] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:40.380 [2024-12-06 04:09:52.890956] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:40.380 [2024-12-06 04:09:52.890962] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.316 04:09:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:41.316 04:09:53 -- common/autotest_common.sh@862 -- # return 0 00:06:41.316 04:09:53 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=67865 00:06:41.316 04:09:53 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:41.316 04:09:53 -- event/cpu_locks.sh@153 -- # waitforlisten 67865 /var/tmp/spdk2.sock 00:06:41.316 04:09:53 -- common/autotest_common.sh@829 -- # '[' -z 67865 ']' 00:06:41.316 04:09:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:41.316 04:09:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:41.316 04:09:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:41.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:41.316 04:09:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:41.316 04:09:53 -- common/autotest_common.sh@10 -- # set +x 00:06:41.316 [2024-12-06 04:09:53.753844] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:41.316 [2024-12-06 04:09:53.754137] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67865 ] 00:06:41.575 [2024-12-06 04:09:53.899601] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:41.575 [2024-12-06 04:09:53.899656] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:41.575 [2024-12-06 04:09:54.062966] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:41.575 [2024-12-06 04:09:54.063256] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:41.575 [2024-12-06 04:09:54.063383] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:41.575 [2024-12-06 04:09:54.063407] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:42.511 04:09:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:42.511 04:09:54 -- common/autotest_common.sh@862 -- # return 0 00:06:42.511 04:09:54 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:42.511 04:09:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.511 04:09:54 -- common/autotest_common.sh@10 -- # set +x 00:06:42.511 04:09:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.511 04:09:54 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:42.511 04:09:54 -- common/autotest_common.sh@650 -- # local es=0 00:06:42.511 04:09:54 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:42.511 04:09:54 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:42.511 04:09:54 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:42.511 04:09:54 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:42.511 04:09:54 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:42.511 04:09:54 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:42.511 04:09:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.511 04:09:54 -- common/autotest_common.sh@10 -- # set +x 00:06:42.511 [2024-12-06 04:09:54.732529] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 67847 has claimed it. 
00:06:42.511 request: 00:06:42.511 { 00:06:42.511 "method": "framework_enable_cpumask_locks", 00:06:42.511 "req_id": 1 00:06:42.511 } 00:06:42.511 Got JSON-RPC error response 00:06:42.511 response: 00:06:42.511 { 00:06:42.511 "code": -32603, 00:06:42.511 "message": "Failed to claim CPU core: 2" 00:06:42.511 } 00:06:42.511 04:09:54 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:42.511 04:09:54 -- common/autotest_common.sh@653 -- # es=1 00:06:42.511 04:09:54 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:42.511 04:09:54 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:42.511 04:09:54 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:42.511 04:09:54 -- event/cpu_locks.sh@158 -- # waitforlisten 67847 /var/tmp/spdk.sock 00:06:42.511 04:09:54 -- common/autotest_common.sh@829 -- # '[' -z 67847 ']' 00:06:42.511 04:09:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.511 04:09:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:42.511 04:09:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.511 04:09:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:42.511 04:09:54 -- common/autotest_common.sh@10 -- # set +x 00:06:42.511 04:09:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:42.511 04:09:55 -- common/autotest_common.sh@862 -- # return 0 00:06:42.511 04:09:55 -- event/cpu_locks.sh@159 -- # waitforlisten 67865 /var/tmp/spdk2.sock 00:06:42.511 04:09:55 -- common/autotest_common.sh@829 -- # '[' -z 67865 ']' 00:06:42.511 04:09:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:42.511 04:09:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:42.511 04:09:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:42.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
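The JSON-RPC exchange above can be reproduced by hand against the second target; a sketch, assuming the standard scripts/rpc.py client from the SPDK tree (socket path and expected error taken from this trace):

    # The second target was started with --disable-cpumask-locks; asking it to claim
    # its cores afterwards fails because core 2 is still locked by process 67847.
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
    # -> JSON-RPC error -32603: "Failed to claim CPU core: 2", so rpc_cmd returns non-zero.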
00:06:42.511 04:09:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:42.511 04:09:55 -- common/autotest_common.sh@10 -- # set +x 00:06:42.774 04:09:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:42.774 04:09:55 -- common/autotest_common.sh@862 -- # return 0 00:06:42.774 04:09:55 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:42.774 04:09:55 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:42.774 04:09:55 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:42.775 04:09:55 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:42.775 00:06:42.775 real 0m2.668s 00:06:42.775 user 0m1.383s 00:06:42.775 sys 0m0.207s 00:06:42.775 04:09:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:42.775 04:09:55 -- common/autotest_common.sh@10 -- # set +x 00:06:42.775 ************************************ 00:06:42.775 END TEST locking_overlapped_coremask_via_rpc 00:06:42.775 ************************************ 00:06:43.038 04:09:55 -- event/cpu_locks.sh@174 -- # cleanup 00:06:43.038 04:09:55 -- event/cpu_locks.sh@15 -- # [[ -z 67847 ]] 00:06:43.038 04:09:55 -- event/cpu_locks.sh@15 -- # killprocess 67847 00:06:43.039 04:09:55 -- common/autotest_common.sh@936 -- # '[' -z 67847 ']' 00:06:43.039 04:09:55 -- common/autotest_common.sh@940 -- # kill -0 67847 00:06:43.039 04:09:55 -- common/autotest_common.sh@941 -- # uname 00:06:43.039 04:09:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:43.039 04:09:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67847 00:06:43.039 killing process with pid 67847 00:06:43.039 04:09:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:43.039 04:09:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:43.039 04:09:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67847' 00:06:43.039 04:09:55 -- common/autotest_common.sh@955 -- # kill 67847 00:06:43.039 04:09:55 -- common/autotest_common.sh@960 -- # wait 67847 00:06:43.296 04:09:55 -- event/cpu_locks.sh@16 -- # [[ -z 67865 ]] 00:06:43.296 04:09:55 -- event/cpu_locks.sh@16 -- # killprocess 67865 00:06:43.296 04:09:55 -- common/autotest_common.sh@936 -- # '[' -z 67865 ']' 00:06:43.296 04:09:55 -- common/autotest_common.sh@940 -- # kill -0 67865 00:06:43.296 04:09:55 -- common/autotest_common.sh@941 -- # uname 00:06:43.296 04:09:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:43.296 04:09:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67865 00:06:43.296 killing process with pid 67865 00:06:43.296 04:09:55 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:06:43.296 04:09:55 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:06:43.296 04:09:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67865' 00:06:43.296 04:09:55 -- common/autotest_common.sh@955 -- # kill 67865 00:06:43.296 04:09:55 -- common/autotest_common.sh@960 -- # wait 67865 00:06:43.860 04:09:56 -- event/cpu_locks.sh@18 -- # rm -f 00:06:43.860 04:09:56 -- event/cpu_locks.sh@1 -- # cleanup 00:06:43.860 04:09:56 -- event/cpu_locks.sh@15 -- # [[ -z 67847 ]] 00:06:43.860 04:09:56 -- event/cpu_locks.sh@15 -- # killprocess 67847 00:06:43.860 04:09:56 -- 
common/autotest_common.sh@936 -- # '[' -z 67847 ']' 00:06:43.860 04:09:56 -- common/autotest_common.sh@940 -- # kill -0 67847 00:06:43.860 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (67847) - No such process 00:06:43.860 04:09:56 -- common/autotest_common.sh@963 -- # echo 'Process with pid 67847 is not found' 00:06:43.860 Process with pid 67847 is not found 00:06:43.860 Process with pid 67865 is not found 00:06:43.860 04:09:56 -- event/cpu_locks.sh@16 -- # [[ -z 67865 ]] 00:06:43.860 04:09:56 -- event/cpu_locks.sh@16 -- # killprocess 67865 00:06:43.860 04:09:56 -- common/autotest_common.sh@936 -- # '[' -z 67865 ']' 00:06:43.860 04:09:56 -- common/autotest_common.sh@940 -- # kill -0 67865 00:06:43.860 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (67865) - No such process 00:06:43.860 04:09:56 -- common/autotest_common.sh@963 -- # echo 'Process with pid 67865 is not found' 00:06:43.860 04:09:56 -- event/cpu_locks.sh@18 -- # rm -f 00:06:43.860 00:06:43.860 real 0m21.260s 00:06:43.860 user 0m37.563s 00:06:43.860 sys 0m5.624s 00:06:43.860 04:09:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:43.860 04:09:56 -- common/autotest_common.sh@10 -- # set +x 00:06:43.860 ************************************ 00:06:43.860 END TEST cpu_locks 00:06:43.860 ************************************ 00:06:44.119 ************************************ 00:06:44.119 END TEST event 00:06:44.119 ************************************ 00:06:44.119 00:06:44.119 real 0m48.944s 00:06:44.119 user 1m34.923s 00:06:44.119 sys 0m9.617s 00:06:44.119 04:09:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:44.119 04:09:56 -- common/autotest_common.sh@10 -- # set +x 00:06:44.119 04:09:56 -- spdk/autotest.sh@175 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:44.119 04:09:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:44.119 04:09:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:44.119 04:09:56 -- common/autotest_common.sh@10 -- # set +x 00:06:44.120 ************************************ 00:06:44.120 START TEST thread 00:06:44.120 ************************************ 00:06:44.120 04:09:56 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:44.120 * Looking for test storage... 
00:06:44.120 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:44.120 04:09:56 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:44.120 04:09:56 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:44.120 04:09:56 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:44.120 04:09:56 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:44.120 04:09:56 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:44.120 04:09:56 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:44.120 04:09:56 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:44.120 04:09:56 -- scripts/common.sh@335 -- # IFS=.-: 00:06:44.120 04:09:56 -- scripts/common.sh@335 -- # read -ra ver1 00:06:44.120 04:09:56 -- scripts/common.sh@336 -- # IFS=.-: 00:06:44.120 04:09:56 -- scripts/common.sh@336 -- # read -ra ver2 00:06:44.120 04:09:56 -- scripts/common.sh@337 -- # local 'op=<' 00:06:44.120 04:09:56 -- scripts/common.sh@339 -- # ver1_l=2 00:06:44.120 04:09:56 -- scripts/common.sh@340 -- # ver2_l=1 00:06:44.120 04:09:56 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:44.120 04:09:56 -- scripts/common.sh@343 -- # case "$op" in 00:06:44.120 04:09:56 -- scripts/common.sh@344 -- # : 1 00:06:44.120 04:09:56 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:44.378 04:09:56 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:44.378 04:09:56 -- scripts/common.sh@364 -- # decimal 1 00:06:44.378 04:09:56 -- scripts/common.sh@352 -- # local d=1 00:06:44.378 04:09:56 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:44.378 04:09:56 -- scripts/common.sh@354 -- # echo 1 00:06:44.378 04:09:56 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:44.378 04:09:56 -- scripts/common.sh@365 -- # decimal 2 00:06:44.378 04:09:56 -- scripts/common.sh@352 -- # local d=2 00:06:44.378 04:09:56 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:44.378 04:09:56 -- scripts/common.sh@354 -- # echo 2 00:06:44.378 04:09:56 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:44.378 04:09:56 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:44.378 04:09:56 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:44.378 04:09:56 -- scripts/common.sh@367 -- # return 0 00:06:44.378 04:09:56 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:44.378 04:09:56 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:44.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.378 --rc genhtml_branch_coverage=1 00:06:44.378 --rc genhtml_function_coverage=1 00:06:44.378 --rc genhtml_legend=1 00:06:44.378 --rc geninfo_all_blocks=1 00:06:44.378 --rc geninfo_unexecuted_blocks=1 00:06:44.378 00:06:44.378 ' 00:06:44.378 04:09:56 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:44.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.378 --rc genhtml_branch_coverage=1 00:06:44.378 --rc genhtml_function_coverage=1 00:06:44.378 --rc genhtml_legend=1 00:06:44.378 --rc geninfo_all_blocks=1 00:06:44.378 --rc geninfo_unexecuted_blocks=1 00:06:44.378 00:06:44.378 ' 00:06:44.378 04:09:56 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:44.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.378 --rc genhtml_branch_coverage=1 00:06:44.378 --rc genhtml_function_coverage=1 00:06:44.378 --rc genhtml_legend=1 00:06:44.378 --rc geninfo_all_blocks=1 00:06:44.378 --rc geninfo_unexecuted_blocks=1 00:06:44.378 00:06:44.378 ' 00:06:44.378 04:09:56 
-- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:44.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.378 --rc genhtml_branch_coverage=1 00:06:44.378 --rc genhtml_function_coverage=1 00:06:44.378 --rc genhtml_legend=1 00:06:44.378 --rc geninfo_all_blocks=1 00:06:44.378 --rc geninfo_unexecuted_blocks=1 00:06:44.378 00:06:44.378 ' 00:06:44.378 04:09:56 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:44.378 04:09:56 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:44.378 04:09:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:44.378 04:09:56 -- common/autotest_common.sh@10 -- # set +x 00:06:44.378 ************************************ 00:06:44.378 START TEST thread_poller_perf 00:06:44.378 ************************************ 00:06:44.378 04:09:56 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:44.379 [2024-12-06 04:09:56.723779] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:44.379 [2024-12-06 04:09:56.724083] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68000 ] 00:06:44.379 [2024-12-06 04:09:56.863707] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.637 [2024-12-06 04:09:56.951819] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.637 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:45.572 [2024-12-06T04:09:58.137Z] ====================================== 00:06:45.572 [2024-12-06T04:09:58.137Z] busy:2215112128 (cyc) 00:06:45.572 [2024-12-06T04:09:58.137Z] total_run_count: 304000 00:06:45.572 [2024-12-06T04:09:58.137Z] tsc_hz: 2200000000 (cyc) 00:06:45.572 [2024-12-06T04:09:58.137Z] ====================================== 00:06:45.572 [2024-12-06T04:09:58.137Z] poller_cost: 7286 (cyc), 3311 (nsec) 00:06:45.572 00:06:45.572 real 0m1.326s 00:06:45.572 user 0m1.154s 00:06:45.572 sys 0m0.062s 00:06:45.572 04:09:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:45.572 04:09:58 -- common/autotest_common.sh@10 -- # set +x 00:06:45.572 ************************************ 00:06:45.572 END TEST thread_poller_perf 00:06:45.572 ************************************ 00:06:45.572 04:09:58 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:45.572 04:09:58 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:45.572 04:09:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:45.572 04:09:58 -- common/autotest_common.sh@10 -- # set +x 00:06:45.572 ************************************ 00:06:45.572 START TEST thread_poller_perf 00:06:45.572 ************************************ 00:06:45.572 04:09:58 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:45.572 [2024-12-06 04:09:58.104465] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:45.572 [2024-12-06 04:09:58.104599] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68030 ] 00:06:45.831 [2024-12-06 04:09:58.242644] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.831 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:45.831 [2024-12-06 04:09:58.313755] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.210 [2024-12-06T04:09:59.775Z] ====================================== 00:06:47.210 [2024-12-06T04:09:59.775Z] busy:2203039900 (cyc) 00:06:47.210 [2024-12-06T04:09:59.775Z] total_run_count: 4393000 00:06:47.210 [2024-12-06T04:09:59.775Z] tsc_hz: 2200000000 (cyc) 00:06:47.210 [2024-12-06T04:09:59.775Z] ====================================== 00:06:47.210 [2024-12-06T04:09:59.775Z] poller_cost: 501 (cyc), 227 (nsec) 00:06:47.210 00:06:47.210 real 0m1.299s 00:06:47.210 user 0m1.124s 00:06:47.210 sys 0m0.067s 00:06:47.210 04:09:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:47.210 04:09:59 -- common/autotest_common.sh@10 -- # set +x 00:06:47.211 ************************************ 00:06:47.211 END TEST thread_poller_perf 00:06:47.211 ************************************ 00:06:47.211 04:09:59 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:47.211 ************************************ 00:06:47.211 END TEST thread 00:06:47.211 ************************************ 00:06:47.211 00:06:47.211 real 0m2.924s 00:06:47.211 user 0m2.431s 00:06:47.211 sys 0m0.272s 00:06:47.211 04:09:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:47.211 04:09:59 -- common/autotest_common.sh@10 -- # set +x 00:06:47.211 04:09:59 -- spdk/autotest.sh@176 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:47.211 04:09:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:47.211 04:09:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:47.211 04:09:59 -- common/autotest_common.sh@10 -- # set +x 00:06:47.211 ************************************ 00:06:47.211 START TEST accel 00:06:47.211 ************************************ 00:06:47.211 04:09:59 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:47.211 * Looking for test storage... 
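The two poller_cost lines above are just the reported counters divided out (tsc_hz is 2.2 GHz in both runs, so cycles convert to nanoseconds by dividing by 2.2):

    poller_cost = busy / total_run_count
    run 1 (1 us period): 2215112128 / 304000  ≈ 7286 cyc ≈ 3311 nsec
    run 2 (0 us period): 2203039900 / 4393000 ≈  501 cyc ≈  227 nsec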
00:06:47.211 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:06:47.211 04:09:59 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:47.211 04:09:59 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:47.211 04:09:59 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:47.211 04:09:59 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:47.211 04:09:59 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:47.211 04:09:59 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:47.211 04:09:59 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:47.211 04:09:59 -- scripts/common.sh@335 -- # IFS=.-: 00:06:47.211 04:09:59 -- scripts/common.sh@335 -- # read -ra ver1 00:06:47.211 04:09:59 -- scripts/common.sh@336 -- # IFS=.-: 00:06:47.211 04:09:59 -- scripts/common.sh@336 -- # read -ra ver2 00:06:47.211 04:09:59 -- scripts/common.sh@337 -- # local 'op=<' 00:06:47.211 04:09:59 -- scripts/common.sh@339 -- # ver1_l=2 00:06:47.211 04:09:59 -- scripts/common.sh@340 -- # ver2_l=1 00:06:47.211 04:09:59 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:47.211 04:09:59 -- scripts/common.sh@343 -- # case "$op" in 00:06:47.211 04:09:59 -- scripts/common.sh@344 -- # : 1 00:06:47.211 04:09:59 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:47.211 04:09:59 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:47.211 04:09:59 -- scripts/common.sh@364 -- # decimal 1 00:06:47.211 04:09:59 -- scripts/common.sh@352 -- # local d=1 00:06:47.211 04:09:59 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:47.211 04:09:59 -- scripts/common.sh@354 -- # echo 1 00:06:47.211 04:09:59 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:47.211 04:09:59 -- scripts/common.sh@365 -- # decimal 2 00:06:47.211 04:09:59 -- scripts/common.sh@352 -- # local d=2 00:06:47.211 04:09:59 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:47.211 04:09:59 -- scripts/common.sh@354 -- # echo 2 00:06:47.211 04:09:59 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:47.211 04:09:59 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:47.211 04:09:59 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:47.211 04:09:59 -- scripts/common.sh@367 -- # return 0 00:06:47.211 04:09:59 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:47.211 04:09:59 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:47.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.211 --rc genhtml_branch_coverage=1 00:06:47.211 --rc genhtml_function_coverage=1 00:06:47.211 --rc genhtml_legend=1 00:06:47.211 --rc geninfo_all_blocks=1 00:06:47.211 --rc geninfo_unexecuted_blocks=1 00:06:47.211 00:06:47.211 ' 00:06:47.211 04:09:59 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:47.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.211 --rc genhtml_branch_coverage=1 00:06:47.211 --rc genhtml_function_coverage=1 00:06:47.211 --rc genhtml_legend=1 00:06:47.211 --rc geninfo_all_blocks=1 00:06:47.211 --rc geninfo_unexecuted_blocks=1 00:06:47.211 00:06:47.211 ' 00:06:47.211 04:09:59 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:47.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.211 --rc genhtml_branch_coverage=1 00:06:47.211 --rc genhtml_function_coverage=1 00:06:47.211 --rc genhtml_legend=1 00:06:47.211 --rc geninfo_all_blocks=1 00:06:47.211 --rc geninfo_unexecuted_blocks=1 00:06:47.211 00:06:47.211 ' 00:06:47.211 04:09:59 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:47.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.211 --rc genhtml_branch_coverage=1 00:06:47.211 --rc genhtml_function_coverage=1 00:06:47.211 --rc genhtml_legend=1 00:06:47.211 --rc geninfo_all_blocks=1 00:06:47.211 --rc geninfo_unexecuted_blocks=1 00:06:47.211 00:06:47.211 ' 00:06:47.211 04:09:59 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:06:47.211 04:09:59 -- accel/accel.sh@74 -- # get_expected_opcs 00:06:47.211 04:09:59 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:47.211 04:09:59 -- accel/accel.sh@59 -- # spdk_tgt_pid=68118 00:06:47.211 04:09:59 -- accel/accel.sh@60 -- # waitforlisten 68118 00:06:47.211 04:09:59 -- common/autotest_common.sh@829 -- # '[' -z 68118 ']' 00:06:47.211 04:09:59 -- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:47.211 04:09:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.211 04:09:59 -- accel/accel.sh@58 -- # build_accel_config 00:06:47.211 04:09:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:47.211 04:09:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:47.211 04:09:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:47.211 04:09:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:47.211 04:09:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:47.211 04:09:59 -- common/autotest_common.sh@10 -- # set +x 00:06:47.211 04:09:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:47.211 04:09:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:47.211 04:09:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:47.211 04:09:59 -- accel/accel.sh@41 -- # local IFS=, 00:06:47.211 04:09:59 -- accel/accel.sh@42 -- # jq -r . 00:06:47.211 [2024-12-06 04:09:59.722439] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:47.211 [2024-12-06 04:09:59.722555] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68118 ] 00:06:47.471 [2024-12-06 04:09:59.864004] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.471 [2024-12-06 04:09:59.935457] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:47.471 [2024-12-06 04:09:59.935652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.410 04:10:00 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:48.410 04:10:00 -- common/autotest_common.sh@862 -- # return 0 00:06:48.410 04:10:00 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:48.410 04:10:00 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:06:48.410 04:10:00 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:48.410 04:10:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.410 04:10:00 -- common/autotest_common.sh@10 -- # set +x 00:06:48.410 04:10:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.410 04:10:00 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:48.410 04:10:00 -- accel/accel.sh@64 -- # IFS== 00:06:48.410 04:10:00 -- accel/accel.sh@64 -- # read -r opc module 00:06:48.410 04:10:00 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:48.410 04:10:00 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:48.410 04:10:00 -- accel/accel.sh@64 -- # IFS== 00:06:48.410 04:10:00 -- accel/accel.sh@64 -- # read -r opc module 00:06:48.410 04:10:00 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:48.410 04:10:00 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:48.410 04:10:00 -- accel/accel.sh@64 -- # IFS== 00:06:48.410 04:10:00 -- accel/accel.sh@64 -- # read -r opc module 00:06:48.410 04:10:00 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:48.410 04:10:00 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:48.410 04:10:00 -- accel/accel.sh@64 -- # IFS== 00:06:48.410 04:10:00 -- accel/accel.sh@64 -- # read -r opc module 00:06:48.410 04:10:00 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:48.410 04:10:00 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:48.410 04:10:00 -- accel/accel.sh@64 -- # IFS== 00:06:48.410 04:10:00 -- accel/accel.sh@64 -- # read -r opc module 00:06:48.410 04:10:00 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:48.410 04:10:00 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:48.410 04:10:00 -- accel/accel.sh@64 -- # IFS== 00:06:48.410 04:10:00 -- accel/accel.sh@64 -- # read -r opc module 00:06:48.410 04:10:00 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:48.410 04:10:00 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:48.410 04:10:00 -- accel/accel.sh@64 -- # IFS== 00:06:48.410 04:10:00 -- accel/accel.sh@64 -- # read -r opc module 00:06:48.410 04:10:00 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:48.410 04:10:00 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:48.410 04:10:00 -- accel/accel.sh@64 -- # IFS== 00:06:48.410 04:10:00 -- accel/accel.sh@64 -- # read -r opc module 00:06:48.410 04:10:00 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:48.410 04:10:00 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:48.410 04:10:00 -- accel/accel.sh@64 -- # IFS== 00:06:48.410 04:10:00 -- accel/accel.sh@64 -- # read -r opc module 00:06:48.410 04:10:00 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:48.410 04:10:00 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:48.410 04:10:00 -- accel/accel.sh@64 -- # IFS== 00:06:48.410 04:10:00 -- accel/accel.sh@64 -- # read -r opc module 00:06:48.410 04:10:00 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:48.410 04:10:00 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:48.410 04:10:00 -- accel/accel.sh@64 -- # IFS== 00:06:48.410 04:10:00 -- accel/accel.sh@64 -- # read -r opc module 00:06:48.410 04:10:00 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:48.410 04:10:00 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:48.410 04:10:00 -- accel/accel.sh@64 -- # IFS== 00:06:48.410 04:10:00 -- accel/accel.sh@64 -- # read -r opc module 00:06:48.410 
04:10:00 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:48.410 04:10:00 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:48.410 04:10:00 -- accel/accel.sh@64 -- # IFS== 00:06:48.410 04:10:00 -- accel/accel.sh@64 -- # read -r opc module 00:06:48.410 04:10:00 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:48.410 04:10:00 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:48.410 04:10:00 -- accel/accel.sh@64 -- # IFS== 00:06:48.410 04:10:00 -- accel/accel.sh@64 -- # read -r opc module 00:06:48.410 04:10:00 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:48.410 04:10:00 -- accel/accel.sh@67 -- # killprocess 68118 00:06:48.410 04:10:00 -- common/autotest_common.sh@936 -- # '[' -z 68118 ']' 00:06:48.410 04:10:00 -- common/autotest_common.sh@940 -- # kill -0 68118 00:06:48.410 04:10:00 -- common/autotest_common.sh@941 -- # uname 00:06:48.410 04:10:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:48.410 04:10:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68118 00:06:48.410 04:10:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:48.410 04:10:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:48.410 04:10:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68118' 00:06:48.410 killing process with pid 68118 00:06:48.410 04:10:00 -- common/autotest_common.sh@955 -- # kill 68118 00:06:48.410 04:10:00 -- common/autotest_common.sh@960 -- # wait 68118 00:06:48.979 04:10:01 -- accel/accel.sh@68 -- # trap - ERR 00:06:48.979 04:10:01 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:06:48.979 04:10:01 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:48.979 04:10:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:48.979 04:10:01 -- common/autotest_common.sh@10 -- # set +x 00:06:48.979 04:10:01 -- common/autotest_common.sh@1114 -- # accel_perf -h 00:06:48.979 04:10:01 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:48.979 04:10:01 -- accel/accel.sh@12 -- # build_accel_config 00:06:48.979 04:10:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:48.979 04:10:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:48.979 04:10:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:48.979 04:10:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:48.979 04:10:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:48.979 04:10:01 -- accel/accel.sh@41 -- # local IFS=, 00:06:48.979 04:10:01 -- accel/accel.sh@42 -- # jq -r . 
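The teardown traced above is the harness's killprocess helper: it checks that the pid argument is non-empty, probes the SPDK app (pid 68118, running as reactor_0) with kill -0, makes sure it is not a sudo wrapper, then kills the process and waits on it. A rough bash sketch of that flow, reconstructed only from the trace shown here and not the verbatim autotest_common.sh implementation:

    # minimal killprocess-style helper (approximation based on the traced steps)
    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1                      # a pid argument is required
        kill -0 "$pid" 2>/dev/null || return 0         # nothing to do if it is already gone
        if [ "$(uname)" = Linux ]; then
            # never kill a sudo wrapper by mistake
            [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                    # reap the child and surface its exit status
    }
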
00:06:48.979 04:10:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:48.979 04:10:01 -- common/autotest_common.sh@10 -- # set +x 00:06:49.239 04:10:01 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:49.239 04:10:01 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:49.239 04:10:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:49.239 04:10:01 -- common/autotest_common.sh@10 -- # set +x 00:06:49.239 ************************************ 00:06:49.239 START TEST accel_missing_filename 00:06:49.239 ************************************ 00:06:49.239 04:10:01 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress 00:06:49.239 04:10:01 -- common/autotest_common.sh@650 -- # local es=0 00:06:49.239 04:10:01 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:49.239 04:10:01 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:49.239 04:10:01 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:49.239 04:10:01 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:49.239 04:10:01 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:49.239 04:10:01 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress 00:06:49.239 04:10:01 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:49.239 04:10:01 -- accel/accel.sh@12 -- # build_accel_config 00:06:49.239 04:10:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:49.239 04:10:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:49.239 04:10:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:49.239 04:10:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:49.239 04:10:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:49.239 04:10:01 -- accel/accel.sh@41 -- # local IFS=, 00:06:49.239 04:10:01 -- accel/accel.sh@42 -- # jq -r . 00:06:49.239 [2024-12-06 04:10:01.592696] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:49.239 [2024-12-06 04:10:01.592812] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68171 ] 00:06:49.239 [2024-12-06 04:10:01.726249] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.498 [2024-12-06 04:10:01.857438] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.498 [2024-12-06 04:10:01.944052] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:49.757 [2024-12-06 04:10:02.071676] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:49.757 A filename is required. 
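The "A filename is required." error is the expected outcome here: the compress and decompress workloads read their data from an uncompressed input file, so launching -w compress without -l makes accel_perf abort during startup, and the NOT wrapper counts that non-zero exit as a pass. For comparison, the failing and an accepted form of the command would look roughly like this (the input path is illustrative, not one used by this run):

    # fails: compress needs an input file
    accel_perf -t 1 -w compress
    # accepted: -l names the uncompressed input file that drives the workload
    accel_perf -t 1 -w compress -l /path/to/uncompressed/input.bin
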
00:06:49.757 04:10:02 -- common/autotest_common.sh@653 -- # es=234 00:06:49.757 04:10:02 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:49.757 04:10:02 -- common/autotest_common.sh@662 -- # es=106 00:06:49.757 04:10:02 -- common/autotest_common.sh@663 -- # case "$es" in 00:06:49.757 04:10:02 -- common/autotest_common.sh@670 -- # es=1 00:06:49.757 04:10:02 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:49.757 00:06:49.757 real 0m0.621s 00:06:49.757 user 0m0.401s 00:06:49.757 sys 0m0.166s 00:06:49.757 04:10:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:49.757 ************************************ 00:06:49.757 END TEST accel_missing_filename 00:06:49.757 ************************************ 00:06:49.757 04:10:02 -- common/autotest_common.sh@10 -- # set +x 00:06:49.757 04:10:02 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:49.757 04:10:02 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:06:49.757 04:10:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:49.757 04:10:02 -- common/autotest_common.sh@10 -- # set +x 00:06:49.757 ************************************ 00:06:49.757 START TEST accel_compress_verify 00:06:49.757 ************************************ 00:06:49.757 04:10:02 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:49.757 04:10:02 -- common/autotest_common.sh@650 -- # local es=0 00:06:49.757 04:10:02 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:49.757 04:10:02 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:49.757 04:10:02 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:49.757 04:10:02 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:49.757 04:10:02 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:49.757 04:10:02 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:49.757 04:10:02 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:49.757 04:10:02 -- accel/accel.sh@12 -- # build_accel_config 00:06:49.757 04:10:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:49.757 04:10:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:49.757 04:10:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:49.757 04:10:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:49.757 04:10:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:49.757 04:10:02 -- accel/accel.sh@41 -- # local IFS=, 00:06:49.757 04:10:02 -- accel/accel.sh@42 -- # jq -r . 00:06:49.757 [2024-12-06 04:10:02.263463] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:49.757 [2024-12-06 04:10:02.263800] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68195 ] 00:06:50.016 [2024-12-06 04:10:02.402257] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.016 [2024-12-06 04:10:02.526955] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.275 [2024-12-06 04:10:02.613852] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:50.275 [2024-12-06 04:10:02.740371] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:50.534 00:06:50.534 Compression does not support the verify option, aborting. 00:06:50.534 04:10:02 -- common/autotest_common.sh@653 -- # es=161 00:06:50.534 04:10:02 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:50.534 ************************************ 00:06:50.534 END TEST accel_compress_verify 00:06:50.534 ************************************ 00:06:50.534 04:10:02 -- common/autotest_common.sh@662 -- # es=33 00:06:50.534 04:10:02 -- common/autotest_common.sh@663 -- # case "$es" in 00:06:50.534 04:10:02 -- common/autotest_common.sh@670 -- # es=1 00:06:50.534 04:10:02 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:50.534 00:06:50.534 real 0m0.610s 00:06:50.534 user 0m0.379s 00:06:50.534 sys 0m0.172s 00:06:50.534 04:10:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:50.534 04:10:02 -- common/autotest_common.sh@10 -- # set +x 00:06:50.534 04:10:02 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:50.534 04:10:02 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:50.534 04:10:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:50.534 04:10:02 -- common/autotest_common.sh@10 -- # set +x 00:06:50.534 ************************************ 00:06:50.534 START TEST accel_wrong_workload 00:06:50.534 ************************************ 00:06:50.534 04:10:02 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w foobar 00:06:50.534 04:10:02 -- common/autotest_common.sh@650 -- # local es=0 00:06:50.534 04:10:02 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:50.534 04:10:02 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:50.534 04:10:02 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:50.534 04:10:02 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:50.534 04:10:02 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:50.534 04:10:02 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w foobar 00:06:50.534 04:10:02 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:50.534 04:10:02 -- accel/accel.sh@12 -- # build_accel_config 00:06:50.534 04:10:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:50.534 04:10:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.534 04:10:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.534 04:10:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:50.534 04:10:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:50.534 04:10:02 -- accel/accel.sh@41 -- # local IFS=, 00:06:50.534 04:10:02 -- accel/accel.sh@42 -- # jq -r . 
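Each of these negative tests is wrapped in the harness's NOT helper, whose trace shows up as the local es=0 / valid_exec_arg / (( es > 128 )) lines above: it runs the command, captures the exit status, folds statuses above 128 (signal deaths) down to a small value, and succeeds only when the wrapped command failed. A stripped-down sketch of that idea, assuming bash and not the exact autotest_common.sh code:

    # succeed only if the wrapped command fails (simplified NOT helper)
    NOT() {
        local es=0
        "$@" || es=$?
        (( es > 128 )) && es=$((es - 128))   # normalize signal exits, as the es remap in the trace does
        (( es != 0 ))                        # invert: a non-zero status from the command means success here
    }
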
00:06:50.534 Unsupported workload type: foobar 00:06:50.534 [2024-12-06 04:10:02.925870] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:50.534 accel_perf options: 00:06:50.534 [-h help message] 00:06:50.534 [-q queue depth per core] 00:06:50.534 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:50.534 [-T number of threads per core 00:06:50.534 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:50.534 [-t time in seconds] 00:06:50.534 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:50.534 [ dif_verify, , dif_generate, dif_generate_copy 00:06:50.534 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:50.534 [-l for compress/decompress workloads, name of uncompressed input file 00:06:50.534 [-S for crc32c workload, use this seed value (default 0) 00:06:50.534 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:50.534 [-f for fill workload, use this BYTE value (default 255) 00:06:50.534 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:50.534 [-y verify result if this switch is on] 00:06:50.534 [-a tasks to allocate per core (default: same value as -q)] 00:06:50.534 Can be used to spread operations across a wider range of memory. 00:06:50.534 04:10:02 -- common/autotest_common.sh@653 -- # es=1 00:06:50.534 04:10:02 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:50.534 04:10:02 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:50.534 04:10:02 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:50.534 00:06:50.534 real 0m0.035s 00:06:50.534 user 0m0.017s 00:06:50.534 sys 0m0.017s 00:06:50.534 ************************************ 00:06:50.534 END TEST accel_wrong_workload 00:06:50.534 ************************************ 00:06:50.534 04:10:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:50.534 04:10:02 -- common/autotest_common.sh@10 -- # set +x 00:06:50.534 04:10:02 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:50.534 04:10:02 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:06:50.534 04:10:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:50.534 04:10:02 -- common/autotest_common.sh@10 -- # set +x 00:06:50.534 ************************************ 00:06:50.534 START TEST accel_negative_buffers 00:06:50.534 ************************************ 00:06:50.534 04:10:02 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:50.534 04:10:02 -- common/autotest_common.sh@650 -- # local es=0 00:06:50.534 04:10:02 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:50.534 04:10:02 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:50.534 04:10:02 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:50.534 04:10:02 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:50.534 04:10:02 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:50.534 04:10:02 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w xor -y -x -1 00:06:50.534 04:10:02 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:50.534 04:10:02 -- accel/accel.sh@12 -- # 
build_accel_config 00:06:50.534 04:10:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:50.534 04:10:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.534 04:10:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.534 04:10:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:50.534 04:10:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:50.534 04:10:02 -- accel/accel.sh@41 -- # local IFS=, 00:06:50.534 04:10:02 -- accel/accel.sh@42 -- # jq -r . 00:06:50.534 -x option must be non-negative. 00:06:50.534 [2024-12-06 04:10:03.005526] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:50.534 accel_perf options: 00:06:50.534 [-h help message] 00:06:50.534 [-q queue depth per core] 00:06:50.534 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:50.534 [-T number of threads per core 00:06:50.534 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:50.534 [-t time in seconds] 00:06:50.534 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:50.534 [ dif_verify, , dif_generate, dif_generate_copy 00:06:50.534 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:50.534 [-l for compress/decompress workloads, name of uncompressed input file 00:06:50.534 [-S for crc32c workload, use this seed value (default 0) 00:06:50.534 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:50.534 [-f for fill workload, use this BYTE value (default 255) 00:06:50.534 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:50.534 [-y verify result if this switch is on] 00:06:50.535 [-a tasks to allocate per core (default: same value as -q)] 00:06:50.535 Can be used to spread operations across a wider range of memory. 
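The option list printed above also explains why this negative-buffers case fails: -x sets the number of xor source buffers and must be non-negative, with a minimum of 2 for the xor workload. Mirroring the test's own command line, the rejected and an accepted form would look roughly like:

    # rejected: a negative source-buffer count is refused at option parsing
    accel_perf -t 1 -w xor -y -x -1
    # accepted, per the option list above (xor needs at least 2 source buffers)
    accel_perf -t 1 -w xor -y -x 2
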
00:06:50.535 04:10:03 -- common/autotest_common.sh@653 -- # es=1 00:06:50.535 04:10:03 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:50.535 04:10:03 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:50.535 04:10:03 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:50.535 00:06:50.535 real 0m0.031s 00:06:50.535 user 0m0.019s 00:06:50.535 sys 0m0.012s 00:06:50.535 04:10:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:50.535 ************************************ 00:06:50.535 END TEST accel_negative_buffers 00:06:50.535 ************************************ 00:06:50.535 04:10:03 -- common/autotest_common.sh@10 -- # set +x 00:06:50.535 04:10:03 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:50.535 04:10:03 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:50.535 04:10:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:50.535 04:10:03 -- common/autotest_common.sh@10 -- # set +x 00:06:50.535 ************************************ 00:06:50.535 START TEST accel_crc32c 00:06:50.535 ************************************ 00:06:50.535 04:10:03 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:50.535 04:10:03 -- accel/accel.sh@16 -- # local accel_opc 00:06:50.535 04:10:03 -- accel/accel.sh@17 -- # local accel_module 00:06:50.535 04:10:03 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:50.535 04:10:03 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:50.535 04:10:03 -- accel/accel.sh@12 -- # build_accel_config 00:06:50.535 04:10:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:50.535 04:10:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.535 04:10:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.535 04:10:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:50.535 04:10:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:50.535 04:10:03 -- accel/accel.sh@41 -- # local IFS=, 00:06:50.535 04:10:03 -- accel/accel.sh@42 -- # jq -r . 00:06:50.535 [2024-12-06 04:10:03.087971] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:50.535 [2024-12-06 04:10:03.088062] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68259 ] 00:06:50.793 [2024-12-06 04:10:03.225623] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.052 [2024-12-06 04:10:03.367003] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.428 04:10:04 -- accel/accel.sh@18 -- # out=' 00:06:52.428 SPDK Configuration: 00:06:52.428 Core mask: 0x1 00:06:52.428 00:06:52.428 Accel Perf Configuration: 00:06:52.428 Workload Type: crc32c 00:06:52.428 CRC-32C seed: 32 00:06:52.428 Transfer size: 4096 bytes 00:06:52.428 Vector count 1 00:06:52.428 Module: software 00:06:52.428 Queue depth: 32 00:06:52.428 Allocate depth: 32 00:06:52.428 # threads/core: 1 00:06:52.428 Run time: 1 seconds 00:06:52.428 Verify: Yes 00:06:52.428 00:06:52.428 Running for 1 seconds... 
00:06:52.428 00:06:52.428 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:52.428 ------------------------------------------------------------------------------------ 00:06:52.428 0,0 439296/s 1716 MiB/s 0 0 00:06:52.428 ==================================================================================== 00:06:52.428 Total 439296/s 1716 MiB/s 0 0' 00:06:52.428 04:10:04 -- accel/accel.sh@20 -- # IFS=: 00:06:52.428 04:10:04 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:52.428 04:10:04 -- accel/accel.sh@20 -- # read -r var val 00:06:52.428 04:10:04 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:52.428 04:10:04 -- accel/accel.sh@12 -- # build_accel_config 00:06:52.428 04:10:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:52.428 04:10:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:52.428 04:10:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.428 04:10:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:52.428 04:10:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:52.428 04:10:04 -- accel/accel.sh@41 -- # local IFS=, 00:06:52.428 04:10:04 -- accel/accel.sh@42 -- # jq -r . 00:06:52.428 [2024-12-06 04:10:04.706032] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:52.428 [2024-12-06 04:10:04.706127] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68279 ] 00:06:52.428 [2024-12-06 04:10:04.845151] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.428 [2024-12-06 04:10:04.970291] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.689 04:10:05 -- accel/accel.sh@21 -- # val= 00:06:52.689 04:10:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.689 04:10:05 -- accel/accel.sh@20 -- # IFS=: 00:06:52.689 04:10:05 -- accel/accel.sh@20 -- # read -r var val 00:06:52.689 04:10:05 -- accel/accel.sh@21 -- # val= 00:06:52.689 04:10:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.689 04:10:05 -- accel/accel.sh@20 -- # IFS=: 00:06:52.689 04:10:05 -- accel/accel.sh@20 -- # read -r var val 00:06:52.689 04:10:05 -- accel/accel.sh@21 -- # val=0x1 00:06:52.690 04:10:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.690 04:10:05 -- accel/accel.sh@20 -- # IFS=: 00:06:52.690 04:10:05 -- accel/accel.sh@20 -- # read -r var val 00:06:52.690 04:10:05 -- accel/accel.sh@21 -- # val= 00:06:52.690 04:10:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.690 04:10:05 -- accel/accel.sh@20 -- # IFS=: 00:06:52.690 04:10:05 -- accel/accel.sh@20 -- # read -r var val 00:06:52.690 04:10:05 -- accel/accel.sh@21 -- # val= 00:06:52.690 04:10:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.690 04:10:05 -- accel/accel.sh@20 -- # IFS=: 00:06:52.690 04:10:05 -- accel/accel.sh@20 -- # read -r var val 00:06:52.690 04:10:05 -- accel/accel.sh@21 -- # val=crc32c 00:06:52.690 04:10:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.690 04:10:05 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:52.690 04:10:05 -- accel/accel.sh@20 -- # IFS=: 00:06:52.690 04:10:05 -- accel/accel.sh@20 -- # read -r var val 00:06:52.690 04:10:05 -- accel/accel.sh@21 -- # val=32 00:06:52.690 04:10:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.690 04:10:05 -- accel/accel.sh@20 -- # IFS=: 00:06:52.690 04:10:05 -- accel/accel.sh@20 -- # read -r var val 00:06:52.690 04:10:05 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:06:52.690 04:10:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.690 04:10:05 -- accel/accel.sh@20 -- # IFS=: 00:06:52.690 04:10:05 -- accel/accel.sh@20 -- # read -r var val 00:06:52.690 04:10:05 -- accel/accel.sh@21 -- # val= 00:06:52.690 04:10:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.690 04:10:05 -- accel/accel.sh@20 -- # IFS=: 00:06:52.690 04:10:05 -- accel/accel.sh@20 -- # read -r var val 00:06:52.690 04:10:05 -- accel/accel.sh@21 -- # val=software 00:06:52.690 04:10:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.690 04:10:05 -- accel/accel.sh@23 -- # accel_module=software 00:06:52.690 04:10:05 -- accel/accel.sh@20 -- # IFS=: 00:06:52.691 04:10:05 -- accel/accel.sh@20 -- # read -r var val 00:06:52.691 04:10:05 -- accel/accel.sh@21 -- # val=32 00:06:52.691 04:10:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.691 04:10:05 -- accel/accel.sh@20 -- # IFS=: 00:06:52.691 04:10:05 -- accel/accel.sh@20 -- # read -r var val 00:06:52.691 04:10:05 -- accel/accel.sh@21 -- # val=32 00:06:52.691 04:10:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.691 04:10:05 -- accel/accel.sh@20 -- # IFS=: 00:06:52.691 04:10:05 -- accel/accel.sh@20 -- # read -r var val 00:06:52.691 04:10:05 -- accel/accel.sh@21 -- # val=1 00:06:52.691 04:10:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.691 04:10:05 -- accel/accel.sh@20 -- # IFS=: 00:06:52.691 04:10:05 -- accel/accel.sh@20 -- # read -r var val 00:06:52.691 04:10:05 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:52.691 04:10:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.691 04:10:05 -- accel/accel.sh@20 -- # IFS=: 00:06:52.691 04:10:05 -- accel/accel.sh@20 -- # read -r var val 00:06:52.691 04:10:05 -- accel/accel.sh@21 -- # val=Yes 00:06:52.691 04:10:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.691 04:10:05 -- accel/accel.sh@20 -- # IFS=: 00:06:52.691 04:10:05 -- accel/accel.sh@20 -- # read -r var val 00:06:52.691 04:10:05 -- accel/accel.sh@21 -- # val= 00:06:52.691 04:10:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.691 04:10:05 -- accel/accel.sh@20 -- # IFS=: 00:06:52.691 04:10:05 -- accel/accel.sh@20 -- # read -r var val 00:06:52.691 04:10:05 -- accel/accel.sh@21 -- # val= 00:06:52.691 04:10:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.691 04:10:05 -- accel/accel.sh@20 -- # IFS=: 00:06:52.691 04:10:05 -- accel/accel.sh@20 -- # read -r var val 00:06:54.148 04:10:06 -- accel/accel.sh@21 -- # val= 00:06:54.148 04:10:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.148 04:10:06 -- accel/accel.sh@20 -- # IFS=: 00:06:54.148 04:10:06 -- accel/accel.sh@20 -- # read -r var val 00:06:54.148 04:10:06 -- accel/accel.sh@21 -- # val= 00:06:54.148 04:10:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.148 04:10:06 -- accel/accel.sh@20 -- # IFS=: 00:06:54.148 04:10:06 -- accel/accel.sh@20 -- # read -r var val 00:06:54.148 04:10:06 -- accel/accel.sh@21 -- # val= 00:06:54.148 04:10:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.148 04:10:06 -- accel/accel.sh@20 -- # IFS=: 00:06:54.148 04:10:06 -- accel/accel.sh@20 -- # read -r var val 00:06:54.148 04:10:06 -- accel/accel.sh@21 -- # val= 00:06:54.148 04:10:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.148 04:10:06 -- accel/accel.sh@20 -- # IFS=: 00:06:54.148 04:10:06 -- accel/accel.sh@20 -- # read -r var val 00:06:54.148 04:10:06 -- accel/accel.sh@21 -- # val= 00:06:54.148 04:10:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.148 04:10:06 -- accel/accel.sh@20 -- # IFS=: 00:06:54.148 04:10:06 -- 
accel/accel.sh@20 -- # read -r var val 00:06:54.148 04:10:06 -- accel/accel.sh@21 -- # val= 00:06:54.148 04:10:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.148 04:10:06 -- accel/accel.sh@20 -- # IFS=: 00:06:54.148 04:10:06 -- accel/accel.sh@20 -- # read -r var val 00:06:54.148 04:10:06 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:54.148 04:10:06 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:54.148 04:10:06 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:54.148 00:06:54.148 real 0m3.207s 00:06:54.148 user 0m2.697s 00:06:54.148 sys 0m0.302s 00:06:54.148 04:10:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:54.148 ************************************ 00:06:54.148 END TEST accel_crc32c 00:06:54.148 04:10:06 -- common/autotest_common.sh@10 -- # set +x 00:06:54.148 ************************************ 00:06:54.148 04:10:06 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:54.148 04:10:06 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:54.148 04:10:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:54.148 04:10:06 -- common/autotest_common.sh@10 -- # set +x 00:06:54.148 ************************************ 00:06:54.148 START TEST accel_crc32c_C2 00:06:54.148 ************************************ 00:06:54.148 04:10:06 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:54.148 04:10:06 -- accel/accel.sh@16 -- # local accel_opc 00:06:54.148 04:10:06 -- accel/accel.sh@17 -- # local accel_module 00:06:54.148 04:10:06 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:54.148 04:10:06 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:54.148 04:10:06 -- accel/accel.sh@12 -- # build_accel_config 00:06:54.148 04:10:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:54.148 04:10:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:54.148 04:10:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:54.148 04:10:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:54.148 04:10:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:54.148 04:10:06 -- accel/accel.sh@41 -- # local IFS=, 00:06:54.148 04:10:06 -- accel/accel.sh@42 -- # jq -r . 00:06:54.148 [2024-12-06 04:10:06.342060] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:54.148 [2024-12-06 04:10:06.342142] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68313 ] 00:06:54.148 [2024-12-06 04:10:06.476211] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.148 [2024-12-06 04:10:06.593031] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.523 04:10:07 -- accel/accel.sh@18 -- # out=' 00:06:55.523 SPDK Configuration: 00:06:55.523 Core mask: 0x1 00:06:55.523 00:06:55.523 Accel Perf Configuration: 00:06:55.523 Workload Type: crc32c 00:06:55.523 CRC-32C seed: 0 00:06:55.523 Transfer size: 4096 bytes 00:06:55.523 Vector count 2 00:06:55.523 Module: software 00:06:55.523 Queue depth: 32 00:06:55.523 Allocate depth: 32 00:06:55.523 # threads/core: 1 00:06:55.523 Run time: 1 seconds 00:06:55.523 Verify: Yes 00:06:55.523 00:06:55.523 Running for 1 seconds... 
00:06:55.523 00:06:55.523 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:55.523 ------------------------------------------------------------------------------------ 00:06:55.523 0,0 355520/s 2777 MiB/s 0 0 00:06:55.523 ==================================================================================== 00:06:55.523 Total 355520/s 1388 MiB/s 0 0' 00:06:55.523 04:10:07 -- accel/accel.sh@20 -- # IFS=: 00:06:55.523 04:10:07 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:55.523 04:10:07 -- accel/accel.sh@20 -- # read -r var val 00:06:55.523 04:10:07 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:55.523 04:10:07 -- accel/accel.sh@12 -- # build_accel_config 00:06:55.523 04:10:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:55.523 04:10:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.523 04:10:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.523 04:10:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:55.523 04:10:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:55.523 04:10:07 -- accel/accel.sh@41 -- # local IFS=, 00:06:55.523 04:10:07 -- accel/accel.sh@42 -- # jq -r . 00:06:55.523 [2024-12-06 04:10:07.899724] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:55.523 [2024-12-06 04:10:07.899837] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68333 ] 00:06:55.523 [2024-12-06 04:10:08.035819] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.782 [2024-12-06 04:10:08.157589] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.782 04:10:08 -- accel/accel.sh@21 -- # val= 00:06:55.782 04:10:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.782 04:10:08 -- accel/accel.sh@20 -- # IFS=: 00:06:55.782 04:10:08 -- accel/accel.sh@20 -- # read -r var val 00:06:55.782 04:10:08 -- accel/accel.sh@21 -- # val= 00:06:55.782 04:10:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.782 04:10:08 -- accel/accel.sh@20 -- # IFS=: 00:06:55.782 04:10:08 -- accel/accel.sh@20 -- # read -r var val 00:06:55.782 04:10:08 -- accel/accel.sh@21 -- # val=0x1 00:06:55.782 04:10:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.782 04:10:08 -- accel/accel.sh@20 -- # IFS=: 00:06:55.782 04:10:08 -- accel/accel.sh@20 -- # read -r var val 00:06:55.782 04:10:08 -- accel/accel.sh@21 -- # val= 00:06:55.782 04:10:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.782 04:10:08 -- accel/accel.sh@20 -- # IFS=: 00:06:55.782 04:10:08 -- accel/accel.sh@20 -- # read -r var val 00:06:55.782 04:10:08 -- accel/accel.sh@21 -- # val= 00:06:55.782 04:10:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.782 04:10:08 -- accel/accel.sh@20 -- # IFS=: 00:06:55.782 04:10:08 -- accel/accel.sh@20 -- # read -r var val 00:06:55.782 04:10:08 -- accel/accel.sh@21 -- # val=crc32c 00:06:55.782 04:10:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.782 04:10:08 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:55.782 04:10:08 -- accel/accel.sh@20 -- # IFS=: 00:06:55.782 04:10:08 -- accel/accel.sh@20 -- # read -r var val 00:06:55.782 04:10:08 -- accel/accel.sh@21 -- # val=0 00:06:55.782 04:10:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.782 04:10:08 -- accel/accel.sh@20 -- # IFS=: 00:06:55.782 04:10:08 -- accel/accel.sh@20 -- # read -r var val 00:06:55.782 04:10:08 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:06:55.782 04:10:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.782 04:10:08 -- accel/accel.sh@20 -- # IFS=: 00:06:55.782 04:10:08 -- accel/accel.sh@20 -- # read -r var val 00:06:55.782 04:10:08 -- accel/accel.sh@21 -- # val= 00:06:55.782 04:10:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.782 04:10:08 -- accel/accel.sh@20 -- # IFS=: 00:06:55.782 04:10:08 -- accel/accel.sh@20 -- # read -r var val 00:06:55.782 04:10:08 -- accel/accel.sh@21 -- # val=software 00:06:55.782 04:10:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.782 04:10:08 -- accel/accel.sh@23 -- # accel_module=software 00:06:55.782 04:10:08 -- accel/accel.sh@20 -- # IFS=: 00:06:55.782 04:10:08 -- accel/accel.sh@20 -- # read -r var val 00:06:55.782 04:10:08 -- accel/accel.sh@21 -- # val=32 00:06:55.782 04:10:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.782 04:10:08 -- accel/accel.sh@20 -- # IFS=: 00:06:55.782 04:10:08 -- accel/accel.sh@20 -- # read -r var val 00:06:55.782 04:10:08 -- accel/accel.sh@21 -- # val=32 00:06:55.782 04:10:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.782 04:10:08 -- accel/accel.sh@20 -- # IFS=: 00:06:55.782 04:10:08 -- accel/accel.sh@20 -- # read -r var val 00:06:55.782 04:10:08 -- accel/accel.sh@21 -- # val=1 00:06:55.782 04:10:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.782 04:10:08 -- accel/accel.sh@20 -- # IFS=: 00:06:55.782 04:10:08 -- accel/accel.sh@20 -- # read -r var val 00:06:55.782 04:10:08 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:55.782 04:10:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.782 04:10:08 -- accel/accel.sh@20 -- # IFS=: 00:06:55.782 04:10:08 -- accel/accel.sh@20 -- # read -r var val 00:06:55.782 04:10:08 -- accel/accel.sh@21 -- # val=Yes 00:06:55.782 04:10:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.782 04:10:08 -- accel/accel.sh@20 -- # IFS=: 00:06:55.782 04:10:08 -- accel/accel.sh@20 -- # read -r var val 00:06:55.782 04:10:08 -- accel/accel.sh@21 -- # val= 00:06:55.782 04:10:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.782 04:10:08 -- accel/accel.sh@20 -- # IFS=: 00:06:55.782 04:10:08 -- accel/accel.sh@20 -- # read -r var val 00:06:55.782 04:10:08 -- accel/accel.sh@21 -- # val= 00:06:55.782 04:10:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.782 04:10:08 -- accel/accel.sh@20 -- # IFS=: 00:06:55.782 04:10:08 -- accel/accel.sh@20 -- # read -r var val 00:06:57.160 04:10:09 -- accel/accel.sh@21 -- # val= 00:06:57.160 04:10:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.160 04:10:09 -- accel/accel.sh@20 -- # IFS=: 00:06:57.160 04:10:09 -- accel/accel.sh@20 -- # read -r var val 00:06:57.160 04:10:09 -- accel/accel.sh@21 -- # val= 00:06:57.160 04:10:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.160 04:10:09 -- accel/accel.sh@20 -- # IFS=: 00:06:57.160 04:10:09 -- accel/accel.sh@20 -- # read -r var val 00:06:57.160 04:10:09 -- accel/accel.sh@21 -- # val= 00:06:57.160 04:10:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.160 04:10:09 -- accel/accel.sh@20 -- # IFS=: 00:06:57.160 04:10:09 -- accel/accel.sh@20 -- # read -r var val 00:06:57.160 04:10:09 -- accel/accel.sh@21 -- # val= 00:06:57.160 04:10:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.160 04:10:09 -- accel/accel.sh@20 -- # IFS=: 00:06:57.160 04:10:09 -- accel/accel.sh@20 -- # read -r var val 00:06:57.160 04:10:09 -- accel/accel.sh@21 -- # val= 00:06:57.160 04:10:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.160 04:10:09 -- accel/accel.sh@20 -- # IFS=: 00:06:57.160 04:10:09 -- 
accel/accel.sh@20 -- # read -r var val 00:06:57.160 04:10:09 -- accel/accel.sh@21 -- # val= 00:06:57.160 04:10:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.160 04:10:09 -- accel/accel.sh@20 -- # IFS=: 00:06:57.160 04:10:09 -- accel/accel.sh@20 -- # read -r var val 00:06:57.160 04:10:09 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:57.160 04:10:09 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:57.160 04:10:09 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:57.160 00:06:57.160 real 0m3.126s 00:06:57.160 user 0m2.643s 00:06:57.160 sys 0m0.279s 00:06:57.160 04:10:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:57.160 04:10:09 -- common/autotest_common.sh@10 -- # set +x 00:06:57.160 ************************************ 00:06:57.160 END TEST accel_crc32c_C2 00:06:57.160 ************************************ 00:06:57.160 04:10:09 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:57.160 04:10:09 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:57.160 04:10:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:57.160 04:10:09 -- common/autotest_common.sh@10 -- # set +x 00:06:57.160 ************************************ 00:06:57.160 START TEST accel_copy 00:06:57.160 ************************************ 00:06:57.160 04:10:09 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy -y 00:06:57.160 04:10:09 -- accel/accel.sh@16 -- # local accel_opc 00:06:57.160 04:10:09 -- accel/accel.sh@17 -- # local accel_module 00:06:57.160 04:10:09 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:06:57.160 04:10:09 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:57.160 04:10:09 -- accel/accel.sh@12 -- # build_accel_config 00:06:57.160 04:10:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:57.160 04:10:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.160 04:10:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.160 04:10:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:57.160 04:10:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:57.160 04:10:09 -- accel/accel.sh@41 -- # local IFS=, 00:06:57.160 04:10:09 -- accel/accel.sh@42 -- # jq -r . 00:06:57.160 [2024-12-06 04:10:09.518065] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:57.160 [2024-12-06 04:10:09.518157] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68367 ] 00:06:57.160 [2024-12-06 04:10:09.653702] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.419 [2024-12-06 04:10:09.752806] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.794 04:10:11 -- accel/accel.sh@18 -- # out=' 00:06:58.794 SPDK Configuration: 00:06:58.794 Core mask: 0x1 00:06:58.794 00:06:58.794 Accel Perf Configuration: 00:06:58.794 Workload Type: copy 00:06:58.794 Transfer size: 4096 bytes 00:06:58.794 Vector count 1 00:06:58.794 Module: software 00:06:58.794 Queue depth: 32 00:06:58.794 Allocate depth: 32 00:06:58.794 # threads/core: 1 00:06:58.794 Run time: 1 seconds 00:06:58.794 Verify: Yes 00:06:58.794 00:06:58.794 Running for 1 seconds... 
00:06:58.794 00:06:58.794 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:58.794 ------------------------------------------------------------------------------------ 00:06:58.794 0,0 313248/s 1223 MiB/s 0 0 00:06:58.794 ==================================================================================== 00:06:58.794 Total 313248/s 1223 MiB/s 0 0' 00:06:58.794 04:10:11 -- accel/accel.sh@20 -- # IFS=: 00:06:58.794 04:10:11 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:58.794 04:10:11 -- accel/accel.sh@20 -- # read -r var val 00:06:58.794 04:10:11 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:58.794 04:10:11 -- accel/accel.sh@12 -- # build_accel_config 00:06:58.794 04:10:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:58.794 04:10:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.794 04:10:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.794 04:10:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:58.794 04:10:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:58.794 04:10:11 -- accel/accel.sh@41 -- # local IFS=, 00:06:58.794 04:10:11 -- accel/accel.sh@42 -- # jq -r . 00:06:58.794 [2024-12-06 04:10:11.044824] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:58.794 [2024-12-06 04:10:11.044926] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68387 ] 00:06:58.794 [2024-12-06 04:10:11.178945] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.794 [2024-12-06 04:10:11.280283] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.053 04:10:11 -- accel/accel.sh@21 -- # val= 00:06:59.053 04:10:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.053 04:10:11 -- accel/accel.sh@20 -- # IFS=: 00:06:59.053 04:10:11 -- accel/accel.sh@20 -- # read -r var val 00:06:59.053 04:10:11 -- accel/accel.sh@21 -- # val= 00:06:59.053 04:10:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.053 04:10:11 -- accel/accel.sh@20 -- # IFS=: 00:06:59.053 04:10:11 -- accel/accel.sh@20 -- # read -r var val 00:06:59.053 04:10:11 -- accel/accel.sh@21 -- # val=0x1 00:06:59.053 04:10:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.053 04:10:11 -- accel/accel.sh@20 -- # IFS=: 00:06:59.053 04:10:11 -- accel/accel.sh@20 -- # read -r var val 00:06:59.053 04:10:11 -- accel/accel.sh@21 -- # val= 00:06:59.053 04:10:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.053 04:10:11 -- accel/accel.sh@20 -- # IFS=: 00:06:59.053 04:10:11 -- accel/accel.sh@20 -- # read -r var val 00:06:59.053 04:10:11 -- accel/accel.sh@21 -- # val= 00:06:59.053 04:10:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.053 04:10:11 -- accel/accel.sh@20 -- # IFS=: 00:06:59.053 04:10:11 -- accel/accel.sh@20 -- # read -r var val 00:06:59.053 04:10:11 -- accel/accel.sh@21 -- # val=copy 00:06:59.053 04:10:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.053 04:10:11 -- accel/accel.sh@24 -- # accel_opc=copy 00:06:59.053 04:10:11 -- accel/accel.sh@20 -- # IFS=: 00:06:59.053 04:10:11 -- accel/accel.sh@20 -- # read -r var val 00:06:59.053 04:10:11 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:59.053 04:10:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.053 04:10:11 -- accel/accel.sh@20 -- # IFS=: 00:06:59.053 04:10:11 -- accel/accel.sh@20 -- # read -r var val 00:06:59.053 04:10:11 -- 
accel/accel.sh@21 -- # val= 00:06:59.053 04:10:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.053 04:10:11 -- accel/accel.sh@20 -- # IFS=: 00:06:59.053 04:10:11 -- accel/accel.sh@20 -- # read -r var val 00:06:59.053 04:10:11 -- accel/accel.sh@21 -- # val=software 00:06:59.053 04:10:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.053 04:10:11 -- accel/accel.sh@23 -- # accel_module=software 00:06:59.053 04:10:11 -- accel/accel.sh@20 -- # IFS=: 00:06:59.053 04:10:11 -- accel/accel.sh@20 -- # read -r var val 00:06:59.053 04:10:11 -- accel/accel.sh@21 -- # val=32 00:06:59.054 04:10:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.054 04:10:11 -- accel/accel.sh@20 -- # IFS=: 00:06:59.054 04:10:11 -- accel/accel.sh@20 -- # read -r var val 00:06:59.054 04:10:11 -- accel/accel.sh@21 -- # val=32 00:06:59.054 04:10:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.054 04:10:11 -- accel/accel.sh@20 -- # IFS=: 00:06:59.054 04:10:11 -- accel/accel.sh@20 -- # read -r var val 00:06:59.054 04:10:11 -- accel/accel.sh@21 -- # val=1 00:06:59.054 04:10:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.054 04:10:11 -- accel/accel.sh@20 -- # IFS=: 00:06:59.054 04:10:11 -- accel/accel.sh@20 -- # read -r var val 00:06:59.054 04:10:11 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:59.054 04:10:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.054 04:10:11 -- accel/accel.sh@20 -- # IFS=: 00:06:59.054 04:10:11 -- accel/accel.sh@20 -- # read -r var val 00:06:59.054 04:10:11 -- accel/accel.sh@21 -- # val=Yes 00:06:59.054 04:10:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.054 04:10:11 -- accel/accel.sh@20 -- # IFS=: 00:06:59.054 04:10:11 -- accel/accel.sh@20 -- # read -r var val 00:06:59.054 04:10:11 -- accel/accel.sh@21 -- # val= 00:06:59.054 04:10:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.054 04:10:11 -- accel/accel.sh@20 -- # IFS=: 00:06:59.054 04:10:11 -- accel/accel.sh@20 -- # read -r var val 00:06:59.054 04:10:11 -- accel/accel.sh@21 -- # val= 00:06:59.054 04:10:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.054 04:10:11 -- accel/accel.sh@20 -- # IFS=: 00:06:59.054 04:10:11 -- accel/accel.sh@20 -- # read -r var val 00:07:00.433 04:10:12 -- accel/accel.sh@21 -- # val= 00:07:00.433 04:10:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.433 04:10:12 -- accel/accel.sh@20 -- # IFS=: 00:07:00.433 04:10:12 -- accel/accel.sh@20 -- # read -r var val 00:07:00.433 04:10:12 -- accel/accel.sh@21 -- # val= 00:07:00.433 04:10:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.433 04:10:12 -- accel/accel.sh@20 -- # IFS=: 00:07:00.433 04:10:12 -- accel/accel.sh@20 -- # read -r var val 00:07:00.433 04:10:12 -- accel/accel.sh@21 -- # val= 00:07:00.433 04:10:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.433 04:10:12 -- accel/accel.sh@20 -- # IFS=: 00:07:00.433 04:10:12 -- accel/accel.sh@20 -- # read -r var val 00:07:00.433 04:10:12 -- accel/accel.sh@21 -- # val= 00:07:00.433 04:10:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.433 04:10:12 -- accel/accel.sh@20 -- # IFS=: 00:07:00.433 04:10:12 -- accel/accel.sh@20 -- # read -r var val 00:07:00.433 04:10:12 -- accel/accel.sh@21 -- # val= 00:07:00.433 04:10:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.433 04:10:12 -- accel/accel.sh@20 -- # IFS=: 00:07:00.433 04:10:12 -- accel/accel.sh@20 -- # read -r var val 00:07:00.433 04:10:12 -- accel/accel.sh@21 -- # val= 00:07:00.433 04:10:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.433 04:10:12 -- accel/accel.sh@20 -- # IFS=: 00:07:00.433 04:10:12 -- 
accel/accel.sh@20 -- # read -r var val 00:07:00.433 04:10:12 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:00.433 04:10:12 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:07:00.433 04:10:12 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:00.433 00:07:00.433 real 0m3.075s 00:07:00.433 user 0m2.579s 00:07:00.433 sys 0m0.290s 00:07:00.433 04:10:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:00.433 04:10:12 -- common/autotest_common.sh@10 -- # set +x 00:07:00.433 ************************************ 00:07:00.433 END TEST accel_copy 00:07:00.433 ************************************ 00:07:00.433 04:10:12 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:00.433 04:10:12 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:00.433 04:10:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:00.433 04:10:12 -- common/autotest_common.sh@10 -- # set +x 00:07:00.433 ************************************ 00:07:00.433 START TEST accel_fill 00:07:00.433 ************************************ 00:07:00.433 04:10:12 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:00.433 04:10:12 -- accel/accel.sh@16 -- # local accel_opc 00:07:00.433 04:10:12 -- accel/accel.sh@17 -- # local accel_module 00:07:00.433 04:10:12 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:00.433 04:10:12 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:00.433 04:10:12 -- accel/accel.sh@12 -- # build_accel_config 00:07:00.433 04:10:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:00.433 04:10:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:00.433 04:10:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:00.433 04:10:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:00.433 04:10:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:00.433 04:10:12 -- accel/accel.sh@41 -- # local IFS=, 00:07:00.433 04:10:12 -- accel/accel.sh@42 -- # jq -r . 00:07:00.433 [2024-12-06 04:10:12.647059] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:00.433 [2024-12-06 04:10:12.647171] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68421 ] 00:07:00.433 [2024-12-06 04:10:12.785909] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.433 [2024-12-06 04:10:12.906588] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.812 04:10:14 -- accel/accel.sh@18 -- # out=' 00:07:01.812 SPDK Configuration: 00:07:01.812 Core mask: 0x1 00:07:01.812 00:07:01.812 Accel Perf Configuration: 00:07:01.812 Workload Type: fill 00:07:01.812 Fill pattern: 0x80 00:07:01.812 Transfer size: 4096 bytes 00:07:01.812 Vector count 1 00:07:01.812 Module: software 00:07:01.812 Queue depth: 64 00:07:01.812 Allocate depth: 64 00:07:01.812 # threads/core: 1 00:07:01.812 Run time: 1 seconds 00:07:01.812 Verify: Yes 00:07:01.812 00:07:01.812 Running for 1 seconds... 
00:07:01.812 00:07:01.812 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:01.812 ------------------------------------------------------------------------------------ 00:07:01.812 0,0 448704/s 1752 MiB/s 0 0 00:07:01.812 ==================================================================================== 00:07:01.812 Total 448704/s 1752 MiB/s 0 0' 00:07:01.812 04:10:14 -- accel/accel.sh@20 -- # IFS=: 00:07:01.812 04:10:14 -- accel/accel.sh@20 -- # read -r var val 00:07:01.812 04:10:14 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:01.812 04:10:14 -- accel/accel.sh@12 -- # build_accel_config 00:07:01.812 04:10:14 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:01.812 04:10:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:01.812 04:10:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:01.812 04:10:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:01.812 04:10:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:01.812 04:10:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:01.812 04:10:14 -- accel/accel.sh@41 -- # local IFS=, 00:07:01.812 04:10:14 -- accel/accel.sh@42 -- # jq -r . 00:07:01.812 [2024-12-06 04:10:14.220316] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:01.812 [2024-12-06 04:10:14.220432] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68441 ] 00:07:01.812 [2024-12-06 04:10:14.354193] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.072 [2024-12-06 04:10:14.475592] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.072 04:10:14 -- accel/accel.sh@21 -- # val= 00:07:02.072 04:10:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.072 04:10:14 -- accel/accel.sh@20 -- # IFS=: 00:07:02.072 04:10:14 -- accel/accel.sh@20 -- # read -r var val 00:07:02.072 04:10:14 -- accel/accel.sh@21 -- # val= 00:07:02.072 04:10:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.072 04:10:14 -- accel/accel.sh@20 -- # IFS=: 00:07:02.072 04:10:14 -- accel/accel.sh@20 -- # read -r var val 00:07:02.072 04:10:14 -- accel/accel.sh@21 -- # val=0x1 00:07:02.072 04:10:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.072 04:10:14 -- accel/accel.sh@20 -- # IFS=: 00:07:02.072 04:10:14 -- accel/accel.sh@20 -- # read -r var val 00:07:02.072 04:10:14 -- accel/accel.sh@21 -- # val= 00:07:02.072 04:10:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.072 04:10:14 -- accel/accel.sh@20 -- # IFS=: 00:07:02.072 04:10:14 -- accel/accel.sh@20 -- # read -r var val 00:07:02.072 04:10:14 -- accel/accel.sh@21 -- # val= 00:07:02.072 04:10:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.072 04:10:14 -- accel/accel.sh@20 -- # IFS=: 00:07:02.072 04:10:14 -- accel/accel.sh@20 -- # read -r var val 00:07:02.072 04:10:14 -- accel/accel.sh@21 -- # val=fill 00:07:02.072 04:10:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.072 04:10:14 -- accel/accel.sh@24 -- # accel_opc=fill 00:07:02.072 04:10:14 -- accel/accel.sh@20 -- # IFS=: 00:07:02.072 04:10:14 -- accel/accel.sh@20 -- # read -r var val 00:07:02.072 04:10:14 -- accel/accel.sh@21 -- # val=0x80 00:07:02.072 04:10:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.072 04:10:14 -- accel/accel.sh@20 -- # IFS=: 00:07:02.072 04:10:14 -- accel/accel.sh@20 -- # read -r var val 
00:07:02.072 04:10:14 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:02.072 04:10:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.072 04:10:14 -- accel/accel.sh@20 -- # IFS=: 00:07:02.072 04:10:14 -- accel/accel.sh@20 -- # read -r var val 00:07:02.072 04:10:14 -- accel/accel.sh@21 -- # val= 00:07:02.072 04:10:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.072 04:10:14 -- accel/accel.sh@20 -- # IFS=: 00:07:02.072 04:10:14 -- accel/accel.sh@20 -- # read -r var val 00:07:02.072 04:10:14 -- accel/accel.sh@21 -- # val=software 00:07:02.072 04:10:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.072 04:10:14 -- accel/accel.sh@23 -- # accel_module=software 00:07:02.072 04:10:14 -- accel/accel.sh@20 -- # IFS=: 00:07:02.072 04:10:14 -- accel/accel.sh@20 -- # read -r var val 00:07:02.072 04:10:14 -- accel/accel.sh@21 -- # val=64 00:07:02.072 04:10:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.072 04:10:14 -- accel/accel.sh@20 -- # IFS=: 00:07:02.072 04:10:14 -- accel/accel.sh@20 -- # read -r var val 00:07:02.072 04:10:14 -- accel/accel.sh@21 -- # val=64 00:07:02.072 04:10:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.072 04:10:14 -- accel/accel.sh@20 -- # IFS=: 00:07:02.072 04:10:14 -- accel/accel.sh@20 -- # read -r var val 00:07:02.072 04:10:14 -- accel/accel.sh@21 -- # val=1 00:07:02.072 04:10:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.072 04:10:14 -- accel/accel.sh@20 -- # IFS=: 00:07:02.072 04:10:14 -- accel/accel.sh@20 -- # read -r var val 00:07:02.072 04:10:14 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:02.072 04:10:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.072 04:10:14 -- accel/accel.sh@20 -- # IFS=: 00:07:02.072 04:10:14 -- accel/accel.sh@20 -- # read -r var val 00:07:02.072 04:10:14 -- accel/accel.sh@21 -- # val=Yes 00:07:02.072 04:10:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.072 04:10:14 -- accel/accel.sh@20 -- # IFS=: 00:07:02.072 04:10:14 -- accel/accel.sh@20 -- # read -r var val 00:07:02.072 04:10:14 -- accel/accel.sh@21 -- # val= 00:07:02.072 04:10:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.072 04:10:14 -- accel/accel.sh@20 -- # IFS=: 00:07:02.072 04:10:14 -- accel/accel.sh@20 -- # read -r var val 00:07:02.072 04:10:14 -- accel/accel.sh@21 -- # val= 00:07:02.072 04:10:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.072 04:10:14 -- accel/accel.sh@20 -- # IFS=: 00:07:02.072 04:10:14 -- accel/accel.sh@20 -- # read -r var val 00:07:03.451 04:10:15 -- accel/accel.sh@21 -- # val= 00:07:03.451 04:10:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.451 04:10:15 -- accel/accel.sh@20 -- # IFS=: 00:07:03.451 04:10:15 -- accel/accel.sh@20 -- # read -r var val 00:07:03.451 04:10:15 -- accel/accel.sh@21 -- # val= 00:07:03.451 04:10:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.451 04:10:15 -- accel/accel.sh@20 -- # IFS=: 00:07:03.451 04:10:15 -- accel/accel.sh@20 -- # read -r var val 00:07:03.451 04:10:15 -- accel/accel.sh@21 -- # val= 00:07:03.451 04:10:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.451 04:10:15 -- accel/accel.sh@20 -- # IFS=: 00:07:03.451 04:10:15 -- accel/accel.sh@20 -- # read -r var val 00:07:03.451 04:10:15 -- accel/accel.sh@21 -- # val= 00:07:03.451 04:10:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.451 04:10:15 -- accel/accel.sh@20 -- # IFS=: 00:07:03.451 04:10:15 -- accel/accel.sh@20 -- # read -r var val 00:07:03.451 04:10:15 -- accel/accel.sh@21 -- # val= 00:07:03.451 04:10:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.451 04:10:15 -- accel/accel.sh@20 -- # IFS=: 
00:07:03.451 04:10:15 -- accel/accel.sh@20 -- # read -r var val 00:07:03.451 04:10:15 -- accel/accel.sh@21 -- # val= 00:07:03.451 04:10:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.451 04:10:15 -- accel/accel.sh@20 -- # IFS=: 00:07:03.451 04:10:15 -- accel/accel.sh@20 -- # read -r var val 00:07:03.451 04:10:15 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:03.451 04:10:15 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:07:03.451 04:10:15 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:03.451 00:07:03.451 real 0m3.137s 00:07:03.451 user 0m2.644s 00:07:03.451 sys 0m0.284s 00:07:03.451 04:10:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:03.452 04:10:15 -- common/autotest_common.sh@10 -- # set +x 00:07:03.452 ************************************ 00:07:03.452 END TEST accel_fill 00:07:03.452 ************************************ 00:07:03.452 04:10:15 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:03.452 04:10:15 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:03.452 04:10:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:03.452 04:10:15 -- common/autotest_common.sh@10 -- # set +x 00:07:03.452 ************************************ 00:07:03.452 START TEST accel_copy_crc32c 00:07:03.452 ************************************ 00:07:03.452 04:10:15 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y 00:07:03.452 04:10:15 -- accel/accel.sh@16 -- # local accel_opc 00:07:03.452 04:10:15 -- accel/accel.sh@17 -- # local accel_module 00:07:03.452 04:10:15 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:03.452 04:10:15 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:03.452 04:10:15 -- accel/accel.sh@12 -- # build_accel_config 00:07:03.452 04:10:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:03.452 04:10:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.452 04:10:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.452 04:10:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:03.452 04:10:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:03.452 04:10:15 -- accel/accel.sh@41 -- # local IFS=, 00:07:03.452 04:10:15 -- accel/accel.sh@42 -- # jq -r . 00:07:03.452 [2024-12-06 04:10:15.836453] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:03.452 [2024-12-06 04:10:15.836557] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68481 ] 00:07:03.452 [2024-12-06 04:10:15.972215] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.711 [2024-12-06 04:10:16.092861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.089 04:10:17 -- accel/accel.sh@18 -- # out=' 00:07:05.089 SPDK Configuration: 00:07:05.089 Core mask: 0x1 00:07:05.089 00:07:05.089 Accel Perf Configuration: 00:07:05.089 Workload Type: copy_crc32c 00:07:05.089 CRC-32C seed: 0 00:07:05.089 Vector size: 4096 bytes 00:07:05.089 Transfer size: 4096 bytes 00:07:05.089 Vector count 1 00:07:05.089 Module: software 00:07:05.089 Queue depth: 32 00:07:05.089 Allocate depth: 32 00:07:05.089 # threads/core: 1 00:07:05.089 Run time: 1 seconds 00:07:05.089 Verify: Yes 00:07:05.089 00:07:05.089 Running for 1 seconds... 
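[Aside — illustration, not part of the captured log] The copy_crc32c workload configured above copies each 4096-byte source buffer to a destination and computes a CRC-32C over the data, starting from the seed of 0 shown in the configuration; with verification enabled the harness checks both the copied bytes and the checksum. In the first run below, 245728 transfers/s at 4096 bytes each works out to roughly 959 MiB/s, matching the reported bandwidth. The sketch below is a plain bitwise software model with made-up names — the SPDK software module uses an optimized CRC path and may fold the seed in differently — so treat it only as a statement of what the operation computes.

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    /* Bitwise CRC-32C (Castagnoli, reflected polynomial 0x82F63B78). */
    static uint32_t crc32c(uint32_t seed, const uint8_t *buf, size_t len)
    {
        uint32_t crc = ~seed;
        for (size_t i = 0; i < len; i++) {
            crc ^= buf[i];
            for (int b = 0; b < 8; b++)
                crc = (crc >> 1) ^ (0x82F63B78u * (crc & 1u));
        }
        return ~crc;
    }

    /* copy_crc32c: copy the buffer, then checksum the copied data. */
    uint32_t copy_crc32c_sw(void *dst, const void *src, size_t len, uint32_t seed)
    {
        memcpy(dst, src, len);
        return crc32c(seed, dst, len);
    }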
00:07:05.089 00:07:05.089 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:05.089 ------------------------------------------------------------------------------------ 00:07:05.089 0,0 245728/s 959 MiB/s 0 0 00:07:05.089 ==================================================================================== 00:07:05.089 Total 245728/s 959 MiB/s 0 0' 00:07:05.089 04:10:17 -- accel/accel.sh@20 -- # IFS=: 00:07:05.089 04:10:17 -- accel/accel.sh@20 -- # read -r var val 00:07:05.089 04:10:17 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:05.089 04:10:17 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:05.089 04:10:17 -- accel/accel.sh@12 -- # build_accel_config 00:07:05.089 04:10:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:05.089 04:10:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:05.089 04:10:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:05.089 04:10:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:05.089 04:10:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:05.089 04:10:17 -- accel/accel.sh@41 -- # local IFS=, 00:07:05.089 04:10:17 -- accel/accel.sh@42 -- # jq -r . 00:07:05.089 [2024-12-06 04:10:17.400675] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:05.089 [2024-12-06 04:10:17.400783] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68495 ] 00:07:05.089 [2024-12-06 04:10:17.538257] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.349 [2024-12-06 04:10:17.659182] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.349 04:10:17 -- accel/accel.sh@21 -- # val= 00:07:05.349 04:10:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.349 04:10:17 -- accel/accel.sh@20 -- # IFS=: 00:07:05.349 04:10:17 -- accel/accel.sh@20 -- # read -r var val 00:07:05.349 04:10:17 -- accel/accel.sh@21 -- # val= 00:07:05.349 04:10:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.349 04:10:17 -- accel/accel.sh@20 -- # IFS=: 00:07:05.349 04:10:17 -- accel/accel.sh@20 -- # read -r var val 00:07:05.349 04:10:17 -- accel/accel.sh@21 -- # val=0x1 00:07:05.349 04:10:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.349 04:10:17 -- accel/accel.sh@20 -- # IFS=: 00:07:05.349 04:10:17 -- accel/accel.sh@20 -- # read -r var val 00:07:05.349 04:10:17 -- accel/accel.sh@21 -- # val= 00:07:05.349 04:10:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.349 04:10:17 -- accel/accel.sh@20 -- # IFS=: 00:07:05.349 04:10:17 -- accel/accel.sh@20 -- # read -r var val 00:07:05.349 04:10:17 -- accel/accel.sh@21 -- # val= 00:07:05.349 04:10:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.349 04:10:17 -- accel/accel.sh@20 -- # IFS=: 00:07:05.349 04:10:17 -- accel/accel.sh@20 -- # read -r var val 00:07:05.349 04:10:17 -- accel/accel.sh@21 -- # val=copy_crc32c 00:07:05.349 04:10:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.349 04:10:17 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:07:05.349 04:10:17 -- accel/accel.sh@20 -- # IFS=: 00:07:05.349 04:10:17 -- accel/accel.sh@20 -- # read -r var val 00:07:05.349 04:10:17 -- accel/accel.sh@21 -- # val=0 00:07:05.349 04:10:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.349 04:10:17 -- accel/accel.sh@20 -- # IFS=: 00:07:05.349 04:10:17 -- accel/accel.sh@20 -- # read -r var val 00:07:05.349 
04:10:17 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:05.349 04:10:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.349 04:10:17 -- accel/accel.sh@20 -- # IFS=: 00:07:05.349 04:10:17 -- accel/accel.sh@20 -- # read -r var val 00:07:05.349 04:10:17 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:05.349 04:10:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.349 04:10:17 -- accel/accel.sh@20 -- # IFS=: 00:07:05.349 04:10:17 -- accel/accel.sh@20 -- # read -r var val 00:07:05.349 04:10:17 -- accel/accel.sh@21 -- # val= 00:07:05.349 04:10:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.349 04:10:17 -- accel/accel.sh@20 -- # IFS=: 00:07:05.349 04:10:17 -- accel/accel.sh@20 -- # read -r var val 00:07:05.349 04:10:17 -- accel/accel.sh@21 -- # val=software 00:07:05.349 04:10:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.349 04:10:17 -- accel/accel.sh@23 -- # accel_module=software 00:07:05.349 04:10:17 -- accel/accel.sh@20 -- # IFS=: 00:07:05.349 04:10:17 -- accel/accel.sh@20 -- # read -r var val 00:07:05.349 04:10:17 -- accel/accel.sh@21 -- # val=32 00:07:05.349 04:10:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.349 04:10:17 -- accel/accel.sh@20 -- # IFS=: 00:07:05.349 04:10:17 -- accel/accel.sh@20 -- # read -r var val 00:07:05.349 04:10:17 -- accel/accel.sh@21 -- # val=32 00:07:05.349 04:10:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.349 04:10:17 -- accel/accel.sh@20 -- # IFS=: 00:07:05.349 04:10:17 -- accel/accel.sh@20 -- # read -r var val 00:07:05.349 04:10:17 -- accel/accel.sh@21 -- # val=1 00:07:05.349 04:10:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.350 04:10:17 -- accel/accel.sh@20 -- # IFS=: 00:07:05.350 04:10:17 -- accel/accel.sh@20 -- # read -r var val 00:07:05.350 04:10:17 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:05.350 04:10:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.350 04:10:17 -- accel/accel.sh@20 -- # IFS=: 00:07:05.350 04:10:17 -- accel/accel.sh@20 -- # read -r var val 00:07:05.350 04:10:17 -- accel/accel.sh@21 -- # val=Yes 00:07:05.350 04:10:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.350 04:10:17 -- accel/accel.sh@20 -- # IFS=: 00:07:05.350 04:10:17 -- accel/accel.sh@20 -- # read -r var val 00:07:05.350 04:10:17 -- accel/accel.sh@21 -- # val= 00:07:05.350 04:10:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.350 04:10:17 -- accel/accel.sh@20 -- # IFS=: 00:07:05.350 04:10:17 -- accel/accel.sh@20 -- # read -r var val 00:07:05.350 04:10:17 -- accel/accel.sh@21 -- # val= 00:07:05.350 04:10:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.350 04:10:17 -- accel/accel.sh@20 -- # IFS=: 00:07:05.350 04:10:17 -- accel/accel.sh@20 -- # read -r var val 00:07:06.732 04:10:18 -- accel/accel.sh@21 -- # val= 00:07:06.732 04:10:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.732 04:10:18 -- accel/accel.sh@20 -- # IFS=: 00:07:06.732 04:10:18 -- accel/accel.sh@20 -- # read -r var val 00:07:06.732 04:10:18 -- accel/accel.sh@21 -- # val= 00:07:06.732 04:10:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.732 04:10:18 -- accel/accel.sh@20 -- # IFS=: 00:07:06.732 04:10:18 -- accel/accel.sh@20 -- # read -r var val 00:07:06.732 04:10:18 -- accel/accel.sh@21 -- # val= 00:07:06.732 04:10:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.732 04:10:18 -- accel/accel.sh@20 -- # IFS=: 00:07:06.732 04:10:18 -- accel/accel.sh@20 -- # read -r var val 00:07:06.732 04:10:18 -- accel/accel.sh@21 -- # val= 00:07:06.732 04:10:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.732 04:10:18 -- accel/accel.sh@20 -- # IFS=: 
00:07:06.732 04:10:18 -- accel/accel.sh@20 -- # read -r var val 00:07:06.732 04:10:18 -- accel/accel.sh@21 -- # val= 00:07:06.732 04:10:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.732 04:10:18 -- accel/accel.sh@20 -- # IFS=: 00:07:06.732 04:10:18 -- accel/accel.sh@20 -- # read -r var val 00:07:06.732 04:10:18 -- accel/accel.sh@21 -- # val= 00:07:06.732 04:10:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.732 04:10:18 -- accel/accel.sh@20 -- # IFS=: 00:07:06.732 04:10:18 -- accel/accel.sh@20 -- # read -r var val 00:07:06.732 04:10:18 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:06.732 04:10:18 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:07:06.732 04:10:18 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:06.732 00:07:06.732 real 0m3.131s 00:07:06.732 user 0m2.628s 00:07:06.732 sys 0m0.295s 00:07:06.732 04:10:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:06.732 04:10:18 -- common/autotest_common.sh@10 -- # set +x 00:07:06.732 ************************************ 00:07:06.732 END TEST accel_copy_crc32c 00:07:06.732 ************************************ 00:07:06.732 04:10:18 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:06.732 04:10:18 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:06.732 04:10:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:06.732 04:10:18 -- common/autotest_common.sh@10 -- # set +x 00:07:06.732 ************************************ 00:07:06.732 START TEST accel_copy_crc32c_C2 00:07:06.732 ************************************ 00:07:06.732 04:10:18 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:06.732 04:10:18 -- accel/accel.sh@16 -- # local accel_opc 00:07:06.732 04:10:18 -- accel/accel.sh@17 -- # local accel_module 00:07:06.732 04:10:18 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:06.732 04:10:19 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:06.732 04:10:19 -- accel/accel.sh@12 -- # build_accel_config 00:07:06.732 04:10:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:06.732 04:10:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.732 04:10:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.732 04:10:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:06.732 04:10:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:06.732 04:10:19 -- accel/accel.sh@41 -- # local IFS=, 00:07:06.732 04:10:19 -- accel/accel.sh@42 -- # jq -r . 00:07:06.732 [2024-12-06 04:10:19.019207] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:06.732 [2024-12-06 04:10:19.019319] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68535 ] 00:07:06.732 [2024-12-06 04:10:19.154448] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.732 [2024-12-06 04:10:19.276730] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.166 04:10:20 -- accel/accel.sh@18 -- # out=' 00:07:08.166 SPDK Configuration: 00:07:08.166 Core mask: 0x1 00:07:08.166 00:07:08.166 Accel Perf Configuration: 00:07:08.166 Workload Type: copy_crc32c 00:07:08.166 CRC-32C seed: 0 00:07:08.166 Vector size: 4096 bytes 00:07:08.166 Transfer size: 8192 bytes 00:07:08.166 Vector count 2 00:07:08.166 Module: software 00:07:08.166 Queue depth: 32 00:07:08.166 Allocate depth: 32 00:07:08.166 # threads/core: 1 00:07:08.166 Run time: 1 seconds 00:07:08.166 Verify: Yes 00:07:08.166 00:07:08.166 Running for 1 seconds... 00:07:08.166 00:07:08.166 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:08.166 ------------------------------------------------------------------------------------ 00:07:08.166 0,0 175072/s 1367 MiB/s 0 0 00:07:08.166 ==================================================================================== 00:07:08.166 Total 175072/s 683 MiB/s 0 0' 00:07:08.166 04:10:20 -- accel/accel.sh@20 -- # IFS=: 00:07:08.166 04:10:20 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:08.166 04:10:20 -- accel/accel.sh@20 -- # read -r var val 00:07:08.166 04:10:20 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:08.166 04:10:20 -- accel/accel.sh@12 -- # build_accel_config 00:07:08.166 04:10:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:08.166 04:10:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:08.166 04:10:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:08.166 04:10:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:08.166 04:10:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:08.166 04:10:20 -- accel/accel.sh@41 -- # local IFS=, 00:07:08.166 04:10:20 -- accel/accel.sh@42 -- # jq -r . 00:07:08.166 [2024-12-06 04:10:20.588420] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
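[Aside — illustration, not part of the captured log] The "-C 2" variant above keeps the 4096-byte vector size but moves 8192 bytes per operation across two source vectors, carrying the CRC-32C across both segments. The per-core row is consistent with that transfer size:

    175072 transfers/s x 8192 bytes/transfer = 1,434,189,824 B/s ~= 1367 MiB/s

The 683 MiB/s in the Total row is what the same transfer count gives when only 4096 bytes (one vector) per operation are counted, so the two figures describe the same run.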
00:07:08.166 [2024-12-06 04:10:20.588543] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68549 ] 00:07:08.166 [2024-12-06 04:10:20.724823] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.426 [2024-12-06 04:10:20.847832] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.426 04:10:20 -- accel/accel.sh@21 -- # val= 00:07:08.426 04:10:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.426 04:10:20 -- accel/accel.sh@20 -- # IFS=: 00:07:08.426 04:10:20 -- accel/accel.sh@20 -- # read -r var val 00:07:08.426 04:10:20 -- accel/accel.sh@21 -- # val= 00:07:08.426 04:10:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.426 04:10:20 -- accel/accel.sh@20 -- # IFS=: 00:07:08.426 04:10:20 -- accel/accel.sh@20 -- # read -r var val 00:07:08.426 04:10:20 -- accel/accel.sh@21 -- # val=0x1 00:07:08.426 04:10:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.426 04:10:20 -- accel/accel.sh@20 -- # IFS=: 00:07:08.426 04:10:20 -- accel/accel.sh@20 -- # read -r var val 00:07:08.426 04:10:20 -- accel/accel.sh@21 -- # val= 00:07:08.426 04:10:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.426 04:10:20 -- accel/accel.sh@20 -- # IFS=: 00:07:08.426 04:10:20 -- accel/accel.sh@20 -- # read -r var val 00:07:08.426 04:10:20 -- accel/accel.sh@21 -- # val= 00:07:08.426 04:10:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.426 04:10:20 -- accel/accel.sh@20 -- # IFS=: 00:07:08.426 04:10:20 -- accel/accel.sh@20 -- # read -r var val 00:07:08.426 04:10:20 -- accel/accel.sh@21 -- # val=copy_crc32c 00:07:08.426 04:10:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.426 04:10:20 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:07:08.427 04:10:20 -- accel/accel.sh@20 -- # IFS=: 00:07:08.427 04:10:20 -- accel/accel.sh@20 -- # read -r var val 00:07:08.427 04:10:20 -- accel/accel.sh@21 -- # val=0 00:07:08.427 04:10:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.427 04:10:20 -- accel/accel.sh@20 -- # IFS=: 00:07:08.427 04:10:20 -- accel/accel.sh@20 -- # read -r var val 00:07:08.427 04:10:20 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:08.427 04:10:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.427 04:10:20 -- accel/accel.sh@20 -- # IFS=: 00:07:08.427 04:10:20 -- accel/accel.sh@20 -- # read -r var val 00:07:08.427 04:10:20 -- accel/accel.sh@21 -- # val='8192 bytes' 00:07:08.427 04:10:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.427 04:10:20 -- accel/accel.sh@20 -- # IFS=: 00:07:08.427 04:10:20 -- accel/accel.sh@20 -- # read -r var val 00:07:08.427 04:10:20 -- accel/accel.sh@21 -- # val= 00:07:08.427 04:10:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.427 04:10:20 -- accel/accel.sh@20 -- # IFS=: 00:07:08.427 04:10:20 -- accel/accel.sh@20 -- # read -r var val 00:07:08.427 04:10:20 -- accel/accel.sh@21 -- # val=software 00:07:08.427 04:10:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.427 04:10:20 -- accel/accel.sh@23 -- # accel_module=software 00:07:08.427 04:10:20 -- accel/accel.sh@20 -- # IFS=: 00:07:08.427 04:10:20 -- accel/accel.sh@20 -- # read -r var val 00:07:08.427 04:10:20 -- accel/accel.sh@21 -- # val=32 00:07:08.427 04:10:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.427 04:10:20 -- accel/accel.sh@20 -- # IFS=: 00:07:08.427 04:10:20 -- accel/accel.sh@20 -- # read -r var val 00:07:08.427 04:10:20 -- accel/accel.sh@21 -- # val=32 
00:07:08.427 04:10:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.427 04:10:20 -- accel/accel.sh@20 -- # IFS=: 00:07:08.427 04:10:20 -- accel/accel.sh@20 -- # read -r var val 00:07:08.427 04:10:20 -- accel/accel.sh@21 -- # val=1 00:07:08.427 04:10:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.427 04:10:20 -- accel/accel.sh@20 -- # IFS=: 00:07:08.427 04:10:20 -- accel/accel.sh@20 -- # read -r var val 00:07:08.427 04:10:20 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:08.427 04:10:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.427 04:10:20 -- accel/accel.sh@20 -- # IFS=: 00:07:08.427 04:10:20 -- accel/accel.sh@20 -- # read -r var val 00:07:08.427 04:10:20 -- accel/accel.sh@21 -- # val=Yes 00:07:08.427 04:10:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.427 04:10:20 -- accel/accel.sh@20 -- # IFS=: 00:07:08.427 04:10:20 -- accel/accel.sh@20 -- # read -r var val 00:07:08.427 04:10:20 -- accel/accel.sh@21 -- # val= 00:07:08.427 04:10:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.427 04:10:20 -- accel/accel.sh@20 -- # IFS=: 00:07:08.427 04:10:20 -- accel/accel.sh@20 -- # read -r var val 00:07:08.427 04:10:20 -- accel/accel.sh@21 -- # val= 00:07:08.427 04:10:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.427 04:10:20 -- accel/accel.sh@20 -- # IFS=: 00:07:08.427 04:10:20 -- accel/accel.sh@20 -- # read -r var val 00:07:09.807 04:10:22 -- accel/accel.sh@21 -- # val= 00:07:09.807 04:10:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.807 04:10:22 -- accel/accel.sh@20 -- # IFS=: 00:07:09.807 04:10:22 -- accel/accel.sh@20 -- # read -r var val 00:07:09.807 04:10:22 -- accel/accel.sh@21 -- # val= 00:07:09.807 04:10:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.807 04:10:22 -- accel/accel.sh@20 -- # IFS=: 00:07:09.807 04:10:22 -- accel/accel.sh@20 -- # read -r var val 00:07:09.807 04:10:22 -- accel/accel.sh@21 -- # val= 00:07:09.807 04:10:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.807 04:10:22 -- accel/accel.sh@20 -- # IFS=: 00:07:09.807 04:10:22 -- accel/accel.sh@20 -- # read -r var val 00:07:09.807 04:10:22 -- accel/accel.sh@21 -- # val= 00:07:09.807 04:10:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.807 04:10:22 -- accel/accel.sh@20 -- # IFS=: 00:07:09.807 04:10:22 -- accel/accel.sh@20 -- # read -r var val 00:07:09.807 04:10:22 -- accel/accel.sh@21 -- # val= 00:07:09.807 04:10:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.807 04:10:22 -- accel/accel.sh@20 -- # IFS=: 00:07:09.807 04:10:22 -- accel/accel.sh@20 -- # read -r var val 00:07:09.807 04:10:22 -- accel/accel.sh@21 -- # val= 00:07:09.807 04:10:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.807 04:10:22 -- accel/accel.sh@20 -- # IFS=: 00:07:09.807 04:10:22 -- accel/accel.sh@20 -- # read -r var val 00:07:09.807 04:10:22 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:09.807 04:10:22 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:07:09.807 04:10:22 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:09.807 00:07:09.807 real 0m3.136s 00:07:09.807 user 0m2.646s 00:07:09.807 sys 0m0.282s 00:07:09.807 04:10:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:09.807 ************************************ 00:07:09.807 04:10:22 -- common/autotest_common.sh@10 -- # set +x 00:07:09.807 END TEST accel_copy_crc32c_C2 00:07:09.807 ************************************ 00:07:09.807 04:10:22 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:09.807 04:10:22 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 
00:07:09.807 04:10:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:09.807 04:10:22 -- common/autotest_common.sh@10 -- # set +x 00:07:09.807 ************************************ 00:07:09.807 START TEST accel_dualcast 00:07:09.807 ************************************ 00:07:09.807 04:10:22 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dualcast -y 00:07:09.807 04:10:22 -- accel/accel.sh@16 -- # local accel_opc 00:07:09.807 04:10:22 -- accel/accel.sh@17 -- # local accel_module 00:07:09.807 04:10:22 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:07:09.807 04:10:22 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:09.807 04:10:22 -- accel/accel.sh@12 -- # build_accel_config 00:07:09.807 04:10:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:09.807 04:10:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:09.807 04:10:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:09.807 04:10:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:09.807 04:10:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:09.807 04:10:22 -- accel/accel.sh@41 -- # local IFS=, 00:07:09.807 04:10:22 -- accel/accel.sh@42 -- # jq -r . 00:07:09.807 [2024-12-06 04:10:22.209621] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:09.807 [2024-12-06 04:10:22.209730] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68589 ] 00:07:09.807 [2024-12-06 04:10:22.346627] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.065 [2024-12-06 04:10:22.468325] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.446 04:10:23 -- accel/accel.sh@18 -- # out=' 00:07:11.446 SPDK Configuration: 00:07:11.446 Core mask: 0x1 00:07:11.446 00:07:11.446 Accel Perf Configuration: 00:07:11.446 Workload Type: dualcast 00:07:11.446 Transfer size: 4096 bytes 00:07:11.446 Vector count 1 00:07:11.446 Module: software 00:07:11.446 Queue depth: 32 00:07:11.446 Allocate depth: 32 00:07:11.446 # threads/core: 1 00:07:11.446 Run time: 1 seconds 00:07:11.446 Verify: Yes 00:07:11.446 00:07:11.446 Running for 1 seconds... 00:07:11.446 00:07:11.446 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:11.446 ------------------------------------------------------------------------------------ 00:07:11.446 0,0 350720/s 1370 MiB/s 0 0 00:07:11.446 ==================================================================================== 00:07:11.446 Total 350720/s 1370 MiB/s 0 0' 00:07:11.446 04:10:23 -- accel/accel.sh@20 -- # IFS=: 00:07:11.446 04:10:23 -- accel/accel.sh@20 -- # read -r var val 00:07:11.446 04:10:23 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:11.446 04:10:23 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:11.446 04:10:23 -- accel/accel.sh@12 -- # build_accel_config 00:07:11.446 04:10:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:11.446 04:10:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:11.446 04:10:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:11.446 04:10:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:11.446 04:10:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:11.447 04:10:23 -- accel/accel.sh@41 -- # local IFS=, 00:07:11.447 04:10:23 -- accel/accel.sh@42 -- # jq -r . 
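[Aside — illustration, not part of the captured log] Dualcast, whose first run above moved 350720 transfers/s (350720 x 4096 bytes ~= 1370 MiB/s), writes a single 4096-byte source to two separate destination buffers in one operation; verification then compares both destinations against the source. A minimal C model with made-up names rather than SPDK APIs:

    #include <stddef.h>
    #include <string.h>

    /* Dualcast: one source, two destinations, one operation. */
    void dualcast_sw(void *dst1, void *dst2, const void *src, size_t len)
    {
        memcpy(dst1, src, len);
        memcpy(dst2, src, len);
    }

    /* Verification compares both copies back against the source. */
    int dualcast_check(const void *dst1, const void *dst2,
                       const void *src, size_t len)
    {
        return (memcmp(dst1, src, len) == 0 &&
                memcmp(dst2, src, len) == 0) ? 0 : -1;
    }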
00:07:11.447 [2024-12-06 04:10:23.779036] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:11.447 [2024-12-06 04:10:23.779193] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68607 ] 00:07:11.447 [2024-12-06 04:10:23.925092] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.706 [2024-12-06 04:10:24.046477] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.706 04:10:24 -- accel/accel.sh@21 -- # val= 00:07:11.706 04:10:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.706 04:10:24 -- accel/accel.sh@20 -- # IFS=: 00:07:11.706 04:10:24 -- accel/accel.sh@20 -- # read -r var val 00:07:11.706 04:10:24 -- accel/accel.sh@21 -- # val= 00:07:11.706 04:10:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.706 04:10:24 -- accel/accel.sh@20 -- # IFS=: 00:07:11.706 04:10:24 -- accel/accel.sh@20 -- # read -r var val 00:07:11.706 04:10:24 -- accel/accel.sh@21 -- # val=0x1 00:07:11.706 04:10:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.706 04:10:24 -- accel/accel.sh@20 -- # IFS=: 00:07:11.706 04:10:24 -- accel/accel.sh@20 -- # read -r var val 00:07:11.706 04:10:24 -- accel/accel.sh@21 -- # val= 00:07:11.706 04:10:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.706 04:10:24 -- accel/accel.sh@20 -- # IFS=: 00:07:11.706 04:10:24 -- accel/accel.sh@20 -- # read -r var val 00:07:11.706 04:10:24 -- accel/accel.sh@21 -- # val= 00:07:11.706 04:10:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.706 04:10:24 -- accel/accel.sh@20 -- # IFS=: 00:07:11.706 04:10:24 -- accel/accel.sh@20 -- # read -r var val 00:07:11.706 04:10:24 -- accel/accel.sh@21 -- # val=dualcast 00:07:11.706 04:10:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.706 04:10:24 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:07:11.706 04:10:24 -- accel/accel.sh@20 -- # IFS=: 00:07:11.706 04:10:24 -- accel/accel.sh@20 -- # read -r var val 00:07:11.706 04:10:24 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:11.706 04:10:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.706 04:10:24 -- accel/accel.sh@20 -- # IFS=: 00:07:11.706 04:10:24 -- accel/accel.sh@20 -- # read -r var val 00:07:11.706 04:10:24 -- accel/accel.sh@21 -- # val= 00:07:11.706 04:10:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.706 04:10:24 -- accel/accel.sh@20 -- # IFS=: 00:07:11.706 04:10:24 -- accel/accel.sh@20 -- # read -r var val 00:07:11.706 04:10:24 -- accel/accel.sh@21 -- # val=software 00:07:11.706 04:10:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.706 04:10:24 -- accel/accel.sh@23 -- # accel_module=software 00:07:11.706 04:10:24 -- accel/accel.sh@20 -- # IFS=: 00:07:11.706 04:10:24 -- accel/accel.sh@20 -- # read -r var val 00:07:11.706 04:10:24 -- accel/accel.sh@21 -- # val=32 00:07:11.706 04:10:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.706 04:10:24 -- accel/accel.sh@20 -- # IFS=: 00:07:11.706 04:10:24 -- accel/accel.sh@20 -- # read -r var val 00:07:11.706 04:10:24 -- accel/accel.sh@21 -- # val=32 00:07:11.706 04:10:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.706 04:10:24 -- accel/accel.sh@20 -- # IFS=: 00:07:11.706 04:10:24 -- accel/accel.sh@20 -- # read -r var val 00:07:11.706 04:10:24 -- accel/accel.sh@21 -- # val=1 00:07:11.706 04:10:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.706 04:10:24 -- accel/accel.sh@20 -- # IFS=: 00:07:11.706 
04:10:24 -- accel/accel.sh@20 -- # read -r var val 00:07:11.706 04:10:24 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:11.706 04:10:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.706 04:10:24 -- accel/accel.sh@20 -- # IFS=: 00:07:11.706 04:10:24 -- accel/accel.sh@20 -- # read -r var val 00:07:11.706 04:10:24 -- accel/accel.sh@21 -- # val=Yes 00:07:11.706 04:10:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.706 04:10:24 -- accel/accel.sh@20 -- # IFS=: 00:07:11.706 04:10:24 -- accel/accel.sh@20 -- # read -r var val 00:07:11.706 04:10:24 -- accel/accel.sh@21 -- # val= 00:07:11.706 04:10:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.706 04:10:24 -- accel/accel.sh@20 -- # IFS=: 00:07:11.706 04:10:24 -- accel/accel.sh@20 -- # read -r var val 00:07:11.706 04:10:24 -- accel/accel.sh@21 -- # val= 00:07:11.706 04:10:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.706 04:10:24 -- accel/accel.sh@20 -- # IFS=: 00:07:11.706 04:10:24 -- accel/accel.sh@20 -- # read -r var val 00:07:13.084 04:10:25 -- accel/accel.sh@21 -- # val= 00:07:13.084 04:10:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.084 04:10:25 -- accel/accel.sh@20 -- # IFS=: 00:07:13.084 04:10:25 -- accel/accel.sh@20 -- # read -r var val 00:07:13.084 04:10:25 -- accel/accel.sh@21 -- # val= 00:07:13.084 04:10:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.084 04:10:25 -- accel/accel.sh@20 -- # IFS=: 00:07:13.084 04:10:25 -- accel/accel.sh@20 -- # read -r var val 00:07:13.084 04:10:25 -- accel/accel.sh@21 -- # val= 00:07:13.084 04:10:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.084 04:10:25 -- accel/accel.sh@20 -- # IFS=: 00:07:13.084 04:10:25 -- accel/accel.sh@20 -- # read -r var val 00:07:13.084 04:10:25 -- accel/accel.sh@21 -- # val= 00:07:13.084 04:10:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.084 04:10:25 -- accel/accel.sh@20 -- # IFS=: 00:07:13.084 04:10:25 -- accel/accel.sh@20 -- # read -r var val 00:07:13.084 04:10:25 -- accel/accel.sh@21 -- # val= 00:07:13.084 04:10:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.084 04:10:25 -- accel/accel.sh@20 -- # IFS=: 00:07:13.084 04:10:25 -- accel/accel.sh@20 -- # read -r var val 00:07:13.084 04:10:25 -- accel/accel.sh@21 -- # val= 00:07:13.084 04:10:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.084 04:10:25 -- accel/accel.sh@20 -- # IFS=: 00:07:13.084 04:10:25 -- accel/accel.sh@20 -- # read -r var val 00:07:13.084 04:10:25 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:13.084 04:10:25 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:07:13.084 04:10:25 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:13.084 00:07:13.084 real 0m3.179s 00:07:13.084 user 0m2.698s 00:07:13.084 sys 0m0.271s 00:07:13.084 04:10:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:13.084 04:10:25 -- common/autotest_common.sh@10 -- # set +x 00:07:13.084 ************************************ 00:07:13.084 END TEST accel_dualcast 00:07:13.084 ************************************ 00:07:13.084 04:10:25 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:13.084 04:10:25 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:13.084 04:10:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:13.084 04:10:25 -- common/autotest_common.sh@10 -- # set +x 00:07:13.084 ************************************ 00:07:13.084 START TEST accel_compare 00:07:13.084 ************************************ 00:07:13.084 04:10:25 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compare -y 00:07:13.084 
04:10:25 -- accel/accel.sh@16 -- # local accel_opc 00:07:13.084 04:10:25 -- accel/accel.sh@17 -- # local accel_module 00:07:13.084 04:10:25 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:07:13.084 04:10:25 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:13.084 04:10:25 -- accel/accel.sh@12 -- # build_accel_config 00:07:13.084 04:10:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:13.084 04:10:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:13.084 04:10:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:13.084 04:10:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:13.084 04:10:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:13.084 04:10:25 -- accel/accel.sh@41 -- # local IFS=, 00:07:13.084 04:10:25 -- accel/accel.sh@42 -- # jq -r . 00:07:13.084 [2024-12-06 04:10:25.434260] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:13.084 [2024-12-06 04:10:25.434354] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68643 ] 00:07:13.084 [2024-12-06 04:10:25.571815] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.342 [2024-12-06 04:10:25.660053] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.715 04:10:26 -- accel/accel.sh@18 -- # out=' 00:07:14.715 SPDK Configuration: 00:07:14.715 Core mask: 0x1 00:07:14.715 00:07:14.715 Accel Perf Configuration: 00:07:14.715 Workload Type: compare 00:07:14.715 Transfer size: 4096 bytes 00:07:14.716 Vector count 1 00:07:14.716 Module: software 00:07:14.716 Queue depth: 32 00:07:14.716 Allocate depth: 32 00:07:14.716 # threads/core: 1 00:07:14.716 Run time: 1 seconds 00:07:14.716 Verify: Yes 00:07:14.716 00:07:14.716 Running for 1 seconds... 00:07:14.716 00:07:14.716 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:14.716 ------------------------------------------------------------------------------------ 00:07:14.716 0,0 456032/s 1781 MiB/s 0 0 00:07:14.716 ==================================================================================== 00:07:14.716 Total 456032/s 1781 MiB/s 0 0' 00:07:14.716 04:10:26 -- accel/accel.sh@20 -- # IFS=: 00:07:14.716 04:10:26 -- accel/accel.sh@20 -- # read -r var val 00:07:14.716 04:10:26 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:14.716 04:10:26 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:14.716 04:10:26 -- accel/accel.sh@12 -- # build_accel_config 00:07:14.716 04:10:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:14.716 04:10:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:14.716 04:10:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:14.716 04:10:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:14.716 04:10:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:14.716 04:10:26 -- accel/accel.sh@41 -- # local IFS=, 00:07:14.716 04:10:26 -- accel/accel.sh@42 -- # jq -r . 00:07:14.716 [2024-12-06 04:10:26.888874] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
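[Aside — illustration, not part of the captured log] The compare workload above checks two 4096-byte buffers for equality; the "Miscompares" column in the result tables counts operations whose buffers did not match, and 456032 transfers/s x 4096 bytes ~= 1781 MiB/s agrees with the reported bandwidth. A one-line software model (illustrative name, not an SPDK function):

    #include <stddef.h>
    #include <string.h>

    /* Compare: 0 when the buffers match, non-zero on a miscompare. */
    int accel_compare_sw(const void *a, const void *b, size_t len)
    {
        return memcmp(a, b, len) != 0;
    }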
00:07:14.716 [2024-12-06 04:10:26.888968] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68657 ] 00:07:14.716 [2024-12-06 04:10:27.027377] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.716 [2024-12-06 04:10:27.112228] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.716 04:10:27 -- accel/accel.sh@21 -- # val= 00:07:14.716 04:10:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.716 04:10:27 -- accel/accel.sh@20 -- # IFS=: 00:07:14.716 04:10:27 -- accel/accel.sh@20 -- # read -r var val 00:07:14.716 04:10:27 -- accel/accel.sh@21 -- # val= 00:07:14.716 04:10:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.716 04:10:27 -- accel/accel.sh@20 -- # IFS=: 00:07:14.716 04:10:27 -- accel/accel.sh@20 -- # read -r var val 00:07:14.716 04:10:27 -- accel/accel.sh@21 -- # val=0x1 00:07:14.716 04:10:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.716 04:10:27 -- accel/accel.sh@20 -- # IFS=: 00:07:14.716 04:10:27 -- accel/accel.sh@20 -- # read -r var val 00:07:14.716 04:10:27 -- accel/accel.sh@21 -- # val= 00:07:14.716 04:10:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.716 04:10:27 -- accel/accel.sh@20 -- # IFS=: 00:07:14.716 04:10:27 -- accel/accel.sh@20 -- # read -r var val 00:07:14.716 04:10:27 -- accel/accel.sh@21 -- # val= 00:07:14.716 04:10:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.716 04:10:27 -- accel/accel.sh@20 -- # IFS=: 00:07:14.716 04:10:27 -- accel/accel.sh@20 -- # read -r var val 00:07:14.716 04:10:27 -- accel/accel.sh@21 -- # val=compare 00:07:14.716 04:10:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.716 04:10:27 -- accel/accel.sh@24 -- # accel_opc=compare 00:07:14.716 04:10:27 -- accel/accel.sh@20 -- # IFS=: 00:07:14.716 04:10:27 -- accel/accel.sh@20 -- # read -r var val 00:07:14.716 04:10:27 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:14.716 04:10:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.716 04:10:27 -- accel/accel.sh@20 -- # IFS=: 00:07:14.716 04:10:27 -- accel/accel.sh@20 -- # read -r var val 00:07:14.716 04:10:27 -- accel/accel.sh@21 -- # val= 00:07:14.716 04:10:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.716 04:10:27 -- accel/accel.sh@20 -- # IFS=: 00:07:14.716 04:10:27 -- accel/accel.sh@20 -- # read -r var val 00:07:14.716 04:10:27 -- accel/accel.sh@21 -- # val=software 00:07:14.716 04:10:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.716 04:10:27 -- accel/accel.sh@23 -- # accel_module=software 00:07:14.716 04:10:27 -- accel/accel.sh@20 -- # IFS=: 00:07:14.716 04:10:27 -- accel/accel.sh@20 -- # read -r var val 00:07:14.716 04:10:27 -- accel/accel.sh@21 -- # val=32 00:07:14.716 04:10:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.716 04:10:27 -- accel/accel.sh@20 -- # IFS=: 00:07:14.716 04:10:27 -- accel/accel.sh@20 -- # read -r var val 00:07:14.716 04:10:27 -- accel/accel.sh@21 -- # val=32 00:07:14.716 04:10:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.716 04:10:27 -- accel/accel.sh@20 -- # IFS=: 00:07:14.716 04:10:27 -- accel/accel.sh@20 -- # read -r var val 00:07:14.716 04:10:27 -- accel/accel.sh@21 -- # val=1 00:07:14.716 04:10:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.716 04:10:27 -- accel/accel.sh@20 -- # IFS=: 00:07:14.716 04:10:27 -- accel/accel.sh@20 -- # read -r var val 00:07:14.716 04:10:27 -- accel/accel.sh@21 -- # val='1 seconds' 
00:07:14.716 04:10:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.716 04:10:27 -- accel/accel.sh@20 -- # IFS=: 00:07:14.716 04:10:27 -- accel/accel.sh@20 -- # read -r var val 00:07:14.716 04:10:27 -- accel/accel.sh@21 -- # val=Yes 00:07:14.716 04:10:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.716 04:10:27 -- accel/accel.sh@20 -- # IFS=: 00:07:14.716 04:10:27 -- accel/accel.sh@20 -- # read -r var val 00:07:14.716 04:10:27 -- accel/accel.sh@21 -- # val= 00:07:14.716 04:10:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.716 04:10:27 -- accel/accel.sh@20 -- # IFS=: 00:07:14.716 04:10:27 -- accel/accel.sh@20 -- # read -r var val 00:07:14.716 04:10:27 -- accel/accel.sh@21 -- # val= 00:07:14.716 04:10:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.716 04:10:27 -- accel/accel.sh@20 -- # IFS=: 00:07:14.716 04:10:27 -- accel/accel.sh@20 -- # read -r var val 00:07:16.154 04:10:28 -- accel/accel.sh@21 -- # val= 00:07:16.154 04:10:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.154 04:10:28 -- accel/accel.sh@20 -- # IFS=: 00:07:16.154 04:10:28 -- accel/accel.sh@20 -- # read -r var val 00:07:16.154 04:10:28 -- accel/accel.sh@21 -- # val= 00:07:16.154 04:10:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.154 04:10:28 -- accel/accel.sh@20 -- # IFS=: 00:07:16.154 04:10:28 -- accel/accel.sh@20 -- # read -r var val 00:07:16.154 04:10:28 -- accel/accel.sh@21 -- # val= 00:07:16.154 04:10:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.154 04:10:28 -- accel/accel.sh@20 -- # IFS=: 00:07:16.154 04:10:28 -- accel/accel.sh@20 -- # read -r var val 00:07:16.154 04:10:28 -- accel/accel.sh@21 -- # val= 00:07:16.154 04:10:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.154 04:10:28 -- accel/accel.sh@20 -- # IFS=: 00:07:16.154 04:10:28 -- accel/accel.sh@20 -- # read -r var val 00:07:16.154 04:10:28 -- accel/accel.sh@21 -- # val= 00:07:16.154 04:10:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.154 04:10:28 -- accel/accel.sh@20 -- # IFS=: 00:07:16.154 04:10:28 -- accel/accel.sh@20 -- # read -r var val 00:07:16.154 04:10:28 -- accel/accel.sh@21 -- # val= 00:07:16.154 04:10:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.154 04:10:28 -- accel/accel.sh@20 -- # IFS=: 00:07:16.154 04:10:28 -- accel/accel.sh@20 -- # read -r var val 00:07:16.154 04:10:28 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:16.154 04:10:28 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:07:16.154 04:10:28 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:16.154 00:07:16.154 real 0m2.908s 00:07:16.154 user 0m2.494s 00:07:16.154 sys 0m0.211s 00:07:16.154 04:10:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:16.154 ************************************ 00:07:16.154 END TEST accel_compare 00:07:16.154 ************************************ 00:07:16.154 04:10:28 -- common/autotest_common.sh@10 -- # set +x 00:07:16.154 04:10:28 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:16.154 04:10:28 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:16.154 04:10:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:16.154 04:10:28 -- common/autotest_common.sh@10 -- # set +x 00:07:16.154 ************************************ 00:07:16.154 START TEST accel_xor 00:07:16.154 ************************************ 00:07:16.154 04:10:28 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y 00:07:16.154 04:10:28 -- accel/accel.sh@16 -- # local accel_opc 00:07:16.154 04:10:28 -- accel/accel.sh@17 -- # local accel_module 00:07:16.154 
04:10:28 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:07:16.154 04:10:28 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:16.154 04:10:28 -- accel/accel.sh@12 -- # build_accel_config 00:07:16.154 04:10:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:16.154 04:10:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:16.154 04:10:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:16.154 04:10:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:16.154 04:10:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:16.154 04:10:28 -- accel/accel.sh@41 -- # local IFS=, 00:07:16.154 04:10:28 -- accel/accel.sh@42 -- # jq -r . 00:07:16.155 [2024-12-06 04:10:28.402952] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:16.155 [2024-12-06 04:10:28.403359] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68697 ] 00:07:16.155 [2024-12-06 04:10:28.550562] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.155 [2024-12-06 04:10:28.639449] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.530 04:10:29 -- accel/accel.sh@18 -- # out=' 00:07:17.530 SPDK Configuration: 00:07:17.530 Core mask: 0x1 00:07:17.530 00:07:17.530 Accel Perf Configuration: 00:07:17.530 Workload Type: xor 00:07:17.530 Source buffers: 2 00:07:17.530 Transfer size: 4096 bytes 00:07:17.530 Vector count 1 00:07:17.530 Module: software 00:07:17.530 Queue depth: 32 00:07:17.530 Allocate depth: 32 00:07:17.530 # threads/core: 1 00:07:17.530 Run time: 1 seconds 00:07:17.530 Verify: Yes 00:07:17.530 00:07:17.530 Running for 1 seconds... 00:07:17.530 00:07:17.530 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:17.530 ------------------------------------------------------------------------------------ 00:07:17.530 0,0 248352/s 970 MiB/s 0 0 00:07:17.530 ==================================================================================== 00:07:17.530 Total 248352/s 970 MiB/s 0 0' 00:07:17.530 04:10:29 -- accel/accel.sh@20 -- # IFS=: 00:07:17.530 04:10:29 -- accel/accel.sh@20 -- # read -r var val 00:07:17.530 04:10:29 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:17.530 04:10:29 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:17.530 04:10:29 -- accel/accel.sh@12 -- # build_accel_config 00:07:17.530 04:10:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:17.530 04:10:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:17.530 04:10:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:17.530 04:10:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:17.530 04:10:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:17.530 04:10:29 -- accel/accel.sh@41 -- # local IFS=, 00:07:17.530 04:10:29 -- accel/accel.sh@42 -- # jq -r . 00:07:17.530 [2024-12-06 04:10:29.871933] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:17.530 [2024-12-06 04:10:29.872187] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68711 ] 00:07:17.530 [2024-12-06 04:10:30.007463] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.789 [2024-12-06 04:10:30.094289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.789 04:10:30 -- accel/accel.sh@21 -- # val= 00:07:17.789 04:10:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.789 04:10:30 -- accel/accel.sh@20 -- # IFS=: 00:07:17.789 04:10:30 -- accel/accel.sh@20 -- # read -r var val 00:07:17.789 04:10:30 -- accel/accel.sh@21 -- # val= 00:07:17.789 04:10:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.789 04:10:30 -- accel/accel.sh@20 -- # IFS=: 00:07:17.789 04:10:30 -- accel/accel.sh@20 -- # read -r var val 00:07:17.789 04:10:30 -- accel/accel.sh@21 -- # val=0x1 00:07:17.789 04:10:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.789 04:10:30 -- accel/accel.sh@20 -- # IFS=: 00:07:17.789 04:10:30 -- accel/accel.sh@20 -- # read -r var val 00:07:17.789 04:10:30 -- accel/accel.sh@21 -- # val= 00:07:17.789 04:10:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.789 04:10:30 -- accel/accel.sh@20 -- # IFS=: 00:07:17.789 04:10:30 -- accel/accel.sh@20 -- # read -r var val 00:07:17.789 04:10:30 -- accel/accel.sh@21 -- # val= 00:07:17.789 04:10:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.789 04:10:30 -- accel/accel.sh@20 -- # IFS=: 00:07:17.789 04:10:30 -- accel/accel.sh@20 -- # read -r var val 00:07:17.789 04:10:30 -- accel/accel.sh@21 -- # val=xor 00:07:17.789 04:10:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.789 04:10:30 -- accel/accel.sh@24 -- # accel_opc=xor 00:07:17.789 04:10:30 -- accel/accel.sh@20 -- # IFS=: 00:07:17.789 04:10:30 -- accel/accel.sh@20 -- # read -r var val 00:07:17.789 04:10:30 -- accel/accel.sh@21 -- # val=2 00:07:17.789 04:10:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.789 04:10:30 -- accel/accel.sh@20 -- # IFS=: 00:07:17.789 04:10:30 -- accel/accel.sh@20 -- # read -r var val 00:07:17.789 04:10:30 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:17.789 04:10:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.789 04:10:30 -- accel/accel.sh@20 -- # IFS=: 00:07:17.789 04:10:30 -- accel/accel.sh@20 -- # read -r var val 00:07:17.789 04:10:30 -- accel/accel.sh@21 -- # val= 00:07:17.789 04:10:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.789 04:10:30 -- accel/accel.sh@20 -- # IFS=: 00:07:17.789 04:10:30 -- accel/accel.sh@20 -- # read -r var val 00:07:17.789 04:10:30 -- accel/accel.sh@21 -- # val=software 00:07:17.789 04:10:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.789 04:10:30 -- accel/accel.sh@23 -- # accel_module=software 00:07:17.789 04:10:30 -- accel/accel.sh@20 -- # IFS=: 00:07:17.789 04:10:30 -- accel/accel.sh@20 -- # read -r var val 00:07:17.789 04:10:30 -- accel/accel.sh@21 -- # val=32 00:07:17.789 04:10:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.789 04:10:30 -- accel/accel.sh@20 -- # IFS=: 00:07:17.789 04:10:30 -- accel/accel.sh@20 -- # read -r var val 00:07:17.789 04:10:30 -- accel/accel.sh@21 -- # val=32 00:07:17.789 04:10:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.789 04:10:30 -- accel/accel.sh@20 -- # IFS=: 00:07:17.789 04:10:30 -- accel/accel.sh@20 -- # read -r var val 00:07:17.789 04:10:30 -- accel/accel.sh@21 -- # val=1 00:07:17.789 04:10:30 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:17.789 04:10:30 -- accel/accel.sh@20 -- # IFS=: 00:07:17.789 04:10:30 -- accel/accel.sh@20 -- # read -r var val 00:07:17.789 04:10:30 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:17.789 04:10:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.789 04:10:30 -- accel/accel.sh@20 -- # IFS=: 00:07:17.789 04:10:30 -- accel/accel.sh@20 -- # read -r var val 00:07:17.789 04:10:30 -- accel/accel.sh@21 -- # val=Yes 00:07:17.789 04:10:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.789 04:10:30 -- accel/accel.sh@20 -- # IFS=: 00:07:17.789 04:10:30 -- accel/accel.sh@20 -- # read -r var val 00:07:17.789 04:10:30 -- accel/accel.sh@21 -- # val= 00:07:17.789 04:10:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.789 04:10:30 -- accel/accel.sh@20 -- # IFS=: 00:07:17.789 04:10:30 -- accel/accel.sh@20 -- # read -r var val 00:07:17.789 04:10:30 -- accel/accel.sh@21 -- # val= 00:07:17.789 04:10:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.789 04:10:30 -- accel/accel.sh@20 -- # IFS=: 00:07:17.789 04:10:30 -- accel/accel.sh@20 -- # read -r var val 00:07:19.166 04:10:31 -- accel/accel.sh@21 -- # val= 00:07:19.166 04:10:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.166 04:10:31 -- accel/accel.sh@20 -- # IFS=: 00:07:19.166 04:10:31 -- accel/accel.sh@20 -- # read -r var val 00:07:19.166 04:10:31 -- accel/accel.sh@21 -- # val= 00:07:19.166 04:10:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.166 04:10:31 -- accel/accel.sh@20 -- # IFS=: 00:07:19.166 04:10:31 -- accel/accel.sh@20 -- # read -r var val 00:07:19.166 04:10:31 -- accel/accel.sh@21 -- # val= 00:07:19.166 04:10:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.166 04:10:31 -- accel/accel.sh@20 -- # IFS=: 00:07:19.166 04:10:31 -- accel/accel.sh@20 -- # read -r var val 00:07:19.166 04:10:31 -- accel/accel.sh@21 -- # val= 00:07:19.166 04:10:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.166 04:10:31 -- accel/accel.sh@20 -- # IFS=: 00:07:19.166 04:10:31 -- accel/accel.sh@20 -- # read -r var val 00:07:19.166 04:10:31 -- accel/accel.sh@21 -- # val= 00:07:19.166 04:10:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.166 04:10:31 -- accel/accel.sh@20 -- # IFS=: 00:07:19.166 04:10:31 -- accel/accel.sh@20 -- # read -r var val 00:07:19.166 04:10:31 -- accel/accel.sh@21 -- # val= 00:07:19.166 04:10:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.166 04:10:31 -- accel/accel.sh@20 -- # IFS=: 00:07:19.166 04:10:31 -- accel/accel.sh@20 -- # read -r var val 00:07:19.166 04:10:31 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:19.166 04:10:31 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:07:19.166 04:10:31 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:19.166 00:07:19.166 real 0m2.934s 00:07:19.166 user 0m2.486s 00:07:19.166 sys 0m0.241s 00:07:19.166 04:10:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:19.166 04:10:31 -- common/autotest_common.sh@10 -- # set +x 00:07:19.166 ************************************ 00:07:19.166 END TEST accel_xor 00:07:19.166 ************************************ 00:07:19.166 04:10:31 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:19.166 04:10:31 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:19.166 04:10:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:19.166 04:10:31 -- common/autotest_common.sh@10 -- # set +x 00:07:19.166 ************************************ 00:07:19.166 START TEST accel_xor 00:07:19.166 ************************************ 00:07:19.166 
04:10:31 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y -x 3 00:07:19.166 04:10:31 -- accel/accel.sh@16 -- # local accel_opc 00:07:19.166 04:10:31 -- accel/accel.sh@17 -- # local accel_module 00:07:19.166 04:10:31 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:07:19.166 04:10:31 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:19.166 04:10:31 -- accel/accel.sh@12 -- # build_accel_config 00:07:19.166 04:10:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:19.166 04:10:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.166 04:10:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.166 04:10:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:19.166 04:10:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:19.166 04:10:31 -- accel/accel.sh@41 -- # local IFS=, 00:07:19.166 04:10:31 -- accel/accel.sh@42 -- # jq -r . 00:07:19.166 [2024-12-06 04:10:31.379490] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:19.166 [2024-12-06 04:10:31.379606] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68751 ] 00:07:19.166 [2024-12-06 04:10:31.518601] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.166 [2024-12-06 04:10:31.604601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.543 04:10:32 -- accel/accel.sh@18 -- # out=' 00:07:20.543 SPDK Configuration: 00:07:20.543 Core mask: 0x1 00:07:20.543 00:07:20.543 Accel Perf Configuration: 00:07:20.543 Workload Type: xor 00:07:20.543 Source buffers: 3 00:07:20.543 Transfer size: 4096 bytes 00:07:20.543 Vector count 1 00:07:20.543 Module: software 00:07:20.543 Queue depth: 32 00:07:20.543 Allocate depth: 32 00:07:20.543 # threads/core: 1 00:07:20.543 Run time: 1 seconds 00:07:20.543 Verify: Yes 00:07:20.543 00:07:20.543 Running for 1 seconds... 00:07:20.543 00:07:20.543 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:20.543 ------------------------------------------------------------------------------------ 00:07:20.544 0,0 242848/s 948 MiB/s 0 0 00:07:20.544 ==================================================================================== 00:07:20.544 Total 242848/s 948 MiB/s 0 0' 00:07:20.544 04:10:32 -- accel/accel.sh@20 -- # IFS=: 00:07:20.544 04:10:32 -- accel/accel.sh@20 -- # read -r var val 00:07:20.544 04:10:32 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:20.544 04:10:32 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:20.544 04:10:32 -- accel/accel.sh@12 -- # build_accel_config 00:07:20.544 04:10:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:20.544 04:10:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:20.544 04:10:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:20.544 04:10:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:20.544 04:10:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:20.544 04:10:32 -- accel/accel.sh@41 -- # local IFS=, 00:07:20.544 04:10:32 -- accel/accel.sh@42 -- # jq -r . 00:07:20.544 [2024-12-06 04:10:32.826928] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
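[Aside — illustration, not part of the captured log] The two xor tests above differ only in the number of source buffers: the default two sources (248352 transfers/s ~= 970 MiB/s) versus three sources with "-x 3" (242848 transfers/s ~= 948 MiB/s), so the extra input costs only a few percent in the software path. Conceptually the operation xors the source buffers byte-wise into the destination, and verification recomputes the result and compares. Sketch with illustrative names, not SPDK APIs:

    #include <stddef.h>
    #include <stdint.h>

    /* N-way xor: dst[i] = srcs[0][i] ^ srcs[1][i] ^ ... ^ srcs[n-1][i]. */
    void accel_xor_sw(uint8_t *dst, const uint8_t *const *srcs,
                      size_t nsrcs, size_t len)
    {
        for (size_t i = 0; i < len; i++) {
            uint8_t v = 0;
            for (size_t s = 0; s < nsrcs; s++)
                v ^= srcs[s][i];
            dst[i] = v;
        }
    }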
00:07:20.544 [2024-12-06 04:10:32.827026] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68765 ] 00:07:20.544 [2024-12-06 04:10:32.961220] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.544 [2024-12-06 04:10:33.029866] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.544 04:10:33 -- accel/accel.sh@21 -- # val= 00:07:20.544 04:10:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.544 04:10:33 -- accel/accel.sh@20 -- # IFS=: 00:07:20.544 04:10:33 -- accel/accel.sh@20 -- # read -r var val 00:07:20.544 04:10:33 -- accel/accel.sh@21 -- # val= 00:07:20.544 04:10:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.544 04:10:33 -- accel/accel.sh@20 -- # IFS=: 00:07:20.544 04:10:33 -- accel/accel.sh@20 -- # read -r var val 00:07:20.544 04:10:33 -- accel/accel.sh@21 -- # val=0x1 00:07:20.544 04:10:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.544 04:10:33 -- accel/accel.sh@20 -- # IFS=: 00:07:20.544 04:10:33 -- accel/accel.sh@20 -- # read -r var val 00:07:20.544 04:10:33 -- accel/accel.sh@21 -- # val= 00:07:20.544 04:10:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.544 04:10:33 -- accel/accel.sh@20 -- # IFS=: 00:07:20.544 04:10:33 -- accel/accel.sh@20 -- # read -r var val 00:07:20.544 04:10:33 -- accel/accel.sh@21 -- # val= 00:07:20.544 04:10:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.544 04:10:33 -- accel/accel.sh@20 -- # IFS=: 00:07:20.544 04:10:33 -- accel/accel.sh@20 -- # read -r var val 00:07:20.544 04:10:33 -- accel/accel.sh@21 -- # val=xor 00:07:20.544 04:10:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.544 04:10:33 -- accel/accel.sh@24 -- # accel_opc=xor 00:07:20.544 04:10:33 -- accel/accel.sh@20 -- # IFS=: 00:07:20.544 04:10:33 -- accel/accel.sh@20 -- # read -r var val 00:07:20.544 04:10:33 -- accel/accel.sh@21 -- # val=3 00:07:20.544 04:10:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.544 04:10:33 -- accel/accel.sh@20 -- # IFS=: 00:07:20.544 04:10:33 -- accel/accel.sh@20 -- # read -r var val 00:07:20.544 04:10:33 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:20.544 04:10:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.544 04:10:33 -- accel/accel.sh@20 -- # IFS=: 00:07:20.544 04:10:33 -- accel/accel.sh@20 -- # read -r var val 00:07:20.544 04:10:33 -- accel/accel.sh@21 -- # val= 00:07:20.544 04:10:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.544 04:10:33 -- accel/accel.sh@20 -- # IFS=: 00:07:20.544 04:10:33 -- accel/accel.sh@20 -- # read -r var val 00:07:20.544 04:10:33 -- accel/accel.sh@21 -- # val=software 00:07:20.544 04:10:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.544 04:10:33 -- accel/accel.sh@23 -- # accel_module=software 00:07:20.544 04:10:33 -- accel/accel.sh@20 -- # IFS=: 00:07:20.544 04:10:33 -- accel/accel.sh@20 -- # read -r var val 00:07:20.544 04:10:33 -- accel/accel.sh@21 -- # val=32 00:07:20.544 04:10:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.544 04:10:33 -- accel/accel.sh@20 -- # IFS=: 00:07:20.544 04:10:33 -- accel/accel.sh@20 -- # read -r var val 00:07:20.544 04:10:33 -- accel/accel.sh@21 -- # val=32 00:07:20.544 04:10:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.544 04:10:33 -- accel/accel.sh@20 -- # IFS=: 00:07:20.544 04:10:33 -- accel/accel.sh@20 -- # read -r var val 00:07:20.544 04:10:33 -- accel/accel.sh@21 -- # val=1 00:07:20.544 04:10:33 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:20.544 04:10:33 -- accel/accel.sh@20 -- # IFS=: 00:07:20.544 04:10:33 -- accel/accel.sh@20 -- # read -r var val 00:07:20.544 04:10:33 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:20.544 04:10:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.544 04:10:33 -- accel/accel.sh@20 -- # IFS=: 00:07:20.544 04:10:33 -- accel/accel.sh@20 -- # read -r var val 00:07:20.544 04:10:33 -- accel/accel.sh@21 -- # val=Yes 00:07:20.544 04:10:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.544 04:10:33 -- accel/accel.sh@20 -- # IFS=: 00:07:20.544 04:10:33 -- accel/accel.sh@20 -- # read -r var val 00:07:20.544 04:10:33 -- accel/accel.sh@21 -- # val= 00:07:20.544 04:10:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.544 04:10:33 -- accel/accel.sh@20 -- # IFS=: 00:07:20.544 04:10:33 -- accel/accel.sh@20 -- # read -r var val 00:07:20.544 04:10:33 -- accel/accel.sh@21 -- # val= 00:07:20.544 04:10:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.544 04:10:33 -- accel/accel.sh@20 -- # IFS=: 00:07:20.544 04:10:33 -- accel/accel.sh@20 -- # read -r var val 00:07:21.919 04:10:34 -- accel/accel.sh@21 -- # val= 00:07:21.919 04:10:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.919 04:10:34 -- accel/accel.sh@20 -- # IFS=: 00:07:21.919 04:10:34 -- accel/accel.sh@20 -- # read -r var val 00:07:21.919 04:10:34 -- accel/accel.sh@21 -- # val= 00:07:21.919 04:10:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.919 04:10:34 -- accel/accel.sh@20 -- # IFS=: 00:07:21.919 04:10:34 -- accel/accel.sh@20 -- # read -r var val 00:07:21.919 04:10:34 -- accel/accel.sh@21 -- # val= 00:07:21.919 04:10:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.919 04:10:34 -- accel/accel.sh@20 -- # IFS=: 00:07:21.919 04:10:34 -- accel/accel.sh@20 -- # read -r var val 00:07:21.919 04:10:34 -- accel/accel.sh@21 -- # val= 00:07:21.919 04:10:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.919 04:10:34 -- accel/accel.sh@20 -- # IFS=: 00:07:21.919 04:10:34 -- accel/accel.sh@20 -- # read -r var val 00:07:21.919 04:10:34 -- accel/accel.sh@21 -- # val= 00:07:21.919 04:10:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.919 04:10:34 -- accel/accel.sh@20 -- # IFS=: 00:07:21.919 04:10:34 -- accel/accel.sh@20 -- # read -r var val 00:07:21.919 04:10:34 -- accel/accel.sh@21 -- # val= 00:07:21.919 04:10:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.919 04:10:34 -- accel/accel.sh@20 -- # IFS=: 00:07:21.919 04:10:34 -- accel/accel.sh@20 -- # read -r var val 00:07:21.919 04:10:34 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:21.919 ************************************ 00:07:21.919 END TEST accel_xor 00:07:21.919 ************************************ 00:07:21.919 04:10:34 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:07:21.919 04:10:34 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:21.919 00:07:21.919 real 0m2.885s 00:07:21.919 user 0m2.453s 00:07:21.919 sys 0m0.224s 00:07:21.919 04:10:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:21.919 04:10:34 -- common/autotest_common.sh@10 -- # set +x 00:07:21.919 04:10:34 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:21.919 04:10:34 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:07:21.919 04:10:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:21.919 04:10:34 -- common/autotest_common.sh@10 -- # set +x 00:07:21.919 ************************************ 00:07:21.919 START TEST accel_dif_verify 00:07:21.919 ************************************ 
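The long runs of val=... / case "$var" in / IFS=: / read -r var val lines in the trace, here and in every test that follows, are accel.sh splitting the captured accel_perf output on ':' so it can record the negotiated settings (Workload Type becomes accel_opc, Module becomes accel_module) and then assert them in the [[ -n software ]], [[ -n xor ]] and [[ software == software ]] checks that close each test. A minimal sketch of that idiom, with the field names taken from the configuration block above rather than from accel.sh itself:

    # Illustrative re-creation of the key:value parsing visible in the trace.
    out='Workload Type: xor
    Module: software'

    while IFS=: read -r var val; do
        case "$var" in
            *"Workload Type"*) accel_opc=${val# } ;;    # drop the leading space
            *Module*)          accel_module=${val# } ;;
        esac
    done <<< "$out"

    [[ -n $accel_module && -n $accel_opc && $accel_module == software ]] \
        && echo "ran $accel_opc on the $accel_module module"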
00:07:21.919 04:10:34 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_verify 00:07:21.919 04:10:34 -- accel/accel.sh@16 -- # local accel_opc 00:07:21.919 04:10:34 -- accel/accel.sh@17 -- # local accel_module 00:07:21.919 04:10:34 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:07:21.919 04:10:34 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:21.919 04:10:34 -- accel/accel.sh@12 -- # build_accel_config 00:07:21.919 04:10:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:21.919 04:10:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:21.919 04:10:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:21.919 04:10:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:21.919 04:10:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:21.919 04:10:34 -- accel/accel.sh@41 -- # local IFS=, 00:07:21.919 04:10:34 -- accel/accel.sh@42 -- # jq -r . 00:07:21.919 [2024-12-06 04:10:34.311891] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:21.919 [2024-12-06 04:10:34.311991] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68805 ] 00:07:21.919 [2024-12-06 04:10:34.446727] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.177 [2024-12-06 04:10:34.528608] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.553 04:10:35 -- accel/accel.sh@18 -- # out=' 00:07:23.554 SPDK Configuration: 00:07:23.554 Core mask: 0x1 00:07:23.554 00:07:23.554 Accel Perf Configuration: 00:07:23.554 Workload Type: dif_verify 00:07:23.554 Vector size: 4096 bytes 00:07:23.554 Transfer size: 4096 bytes 00:07:23.554 Block size: 512 bytes 00:07:23.554 Metadata size: 8 bytes 00:07:23.554 Vector count 1 00:07:23.554 Module: software 00:07:23.554 Queue depth: 32 00:07:23.554 Allocate depth: 32 00:07:23.554 # threads/core: 1 00:07:23.554 Run time: 1 seconds 00:07:23.554 Verify: No 00:07:23.554 00:07:23.554 Running for 1 seconds... 00:07:23.554 00:07:23.554 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:23.554 ------------------------------------------------------------------------------------ 00:07:23.554 0,0 103936/s 412 MiB/s 0 0 00:07:23.554 ==================================================================================== 00:07:23.554 Total 103936/s 406 MiB/s 0 0' 00:07:23.554 04:10:35 -- accel/accel.sh@20 -- # IFS=: 00:07:23.554 04:10:35 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:23.554 04:10:35 -- accel/accel.sh@20 -- # read -r var val 00:07:23.554 04:10:35 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:23.554 04:10:35 -- accel/accel.sh@12 -- # build_accel_config 00:07:23.554 04:10:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:23.554 04:10:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:23.554 04:10:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:23.554 04:10:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:23.554 04:10:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:23.554 04:10:35 -- accel/accel.sh@41 -- # local IFS=, 00:07:23.554 04:10:35 -- accel/accel.sh@42 -- # jq -r . 00:07:23.554 [2024-12-06 04:10:35.749803] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
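The dif_verify configuration above (4096-byte vectors and transfers, 512-byte blocks, 8 bytes of metadata) reads like a T10-DIF style layout: one 8-byte protection field per 512-byte block, checked for every block of every transfer. That is an interpretation of the printed geometry rather than something the log states, but the block math is straightforward:

    # Geometry of one dif_verify transfer, numbers from the configuration above.
    transfer=4096   # Transfer size
    block=512       # Block size
    meta=8          # Metadata size (one DIF tag per block, by this reading)

    echo "blocks per transfer:    $((transfer / block))"          # 8
    echo "DIF bytes per transfer: $((transfer / block * meta))"   # 64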
00:07:23.554 [2024-12-06 04:10:35.749906] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68819 ] 00:07:23.554 [2024-12-06 04:10:35.886795] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.554 [2024-12-06 04:10:35.955258] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.554 04:10:36 -- accel/accel.sh@21 -- # val= 00:07:23.554 04:10:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.554 04:10:36 -- accel/accel.sh@20 -- # IFS=: 00:07:23.554 04:10:36 -- accel/accel.sh@20 -- # read -r var val 00:07:23.554 04:10:36 -- accel/accel.sh@21 -- # val= 00:07:23.554 04:10:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.554 04:10:36 -- accel/accel.sh@20 -- # IFS=: 00:07:23.554 04:10:36 -- accel/accel.sh@20 -- # read -r var val 00:07:23.554 04:10:36 -- accel/accel.sh@21 -- # val=0x1 00:07:23.554 04:10:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.554 04:10:36 -- accel/accel.sh@20 -- # IFS=: 00:07:23.554 04:10:36 -- accel/accel.sh@20 -- # read -r var val 00:07:23.554 04:10:36 -- accel/accel.sh@21 -- # val= 00:07:23.554 04:10:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.554 04:10:36 -- accel/accel.sh@20 -- # IFS=: 00:07:23.554 04:10:36 -- accel/accel.sh@20 -- # read -r var val 00:07:23.554 04:10:36 -- accel/accel.sh@21 -- # val= 00:07:23.554 04:10:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.554 04:10:36 -- accel/accel.sh@20 -- # IFS=: 00:07:23.554 04:10:36 -- accel/accel.sh@20 -- # read -r var val 00:07:23.554 04:10:36 -- accel/accel.sh@21 -- # val=dif_verify 00:07:23.554 04:10:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.554 04:10:36 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:07:23.554 04:10:36 -- accel/accel.sh@20 -- # IFS=: 00:07:23.554 04:10:36 -- accel/accel.sh@20 -- # read -r var val 00:07:23.554 04:10:36 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:23.554 04:10:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.554 04:10:36 -- accel/accel.sh@20 -- # IFS=: 00:07:23.554 04:10:36 -- accel/accel.sh@20 -- # read -r var val 00:07:23.554 04:10:36 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:23.554 04:10:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.554 04:10:36 -- accel/accel.sh@20 -- # IFS=: 00:07:23.554 04:10:36 -- accel/accel.sh@20 -- # read -r var val 00:07:23.554 04:10:36 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:23.554 04:10:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.554 04:10:36 -- accel/accel.sh@20 -- # IFS=: 00:07:23.554 04:10:36 -- accel/accel.sh@20 -- # read -r var val 00:07:23.554 04:10:36 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:23.554 04:10:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.554 04:10:36 -- accel/accel.sh@20 -- # IFS=: 00:07:23.554 04:10:36 -- accel/accel.sh@20 -- # read -r var val 00:07:23.554 04:10:36 -- accel/accel.sh@21 -- # val= 00:07:23.554 04:10:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.554 04:10:36 -- accel/accel.sh@20 -- # IFS=: 00:07:23.554 04:10:36 -- accel/accel.sh@20 -- # read -r var val 00:07:23.554 04:10:36 -- accel/accel.sh@21 -- # val=software 00:07:23.554 04:10:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.554 04:10:36 -- accel/accel.sh@23 -- # accel_module=software 00:07:23.554 04:10:36 -- accel/accel.sh@20 -- # IFS=: 00:07:23.554 04:10:36 -- accel/accel.sh@20 -- # read -r var val 00:07:23.554 04:10:36 -- accel/accel.sh@21 
-- # val=32 00:07:23.554 04:10:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.554 04:10:36 -- accel/accel.sh@20 -- # IFS=: 00:07:23.554 04:10:36 -- accel/accel.sh@20 -- # read -r var val 00:07:23.554 04:10:36 -- accel/accel.sh@21 -- # val=32 00:07:23.554 04:10:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.554 04:10:36 -- accel/accel.sh@20 -- # IFS=: 00:07:23.554 04:10:36 -- accel/accel.sh@20 -- # read -r var val 00:07:23.554 04:10:36 -- accel/accel.sh@21 -- # val=1 00:07:23.554 04:10:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.554 04:10:36 -- accel/accel.sh@20 -- # IFS=: 00:07:23.554 04:10:36 -- accel/accel.sh@20 -- # read -r var val 00:07:23.554 04:10:36 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:23.554 04:10:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.554 04:10:36 -- accel/accel.sh@20 -- # IFS=: 00:07:23.554 04:10:36 -- accel/accel.sh@20 -- # read -r var val 00:07:23.554 04:10:36 -- accel/accel.sh@21 -- # val=No 00:07:23.554 04:10:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.554 04:10:36 -- accel/accel.sh@20 -- # IFS=: 00:07:23.554 04:10:36 -- accel/accel.sh@20 -- # read -r var val 00:07:23.554 04:10:36 -- accel/accel.sh@21 -- # val= 00:07:23.554 04:10:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.554 04:10:36 -- accel/accel.sh@20 -- # IFS=: 00:07:23.554 04:10:36 -- accel/accel.sh@20 -- # read -r var val 00:07:23.554 04:10:36 -- accel/accel.sh@21 -- # val= 00:07:23.554 04:10:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.554 04:10:36 -- accel/accel.sh@20 -- # IFS=: 00:07:23.554 04:10:36 -- accel/accel.sh@20 -- # read -r var val 00:07:24.932 04:10:37 -- accel/accel.sh@21 -- # val= 00:07:24.932 04:10:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.932 04:10:37 -- accel/accel.sh@20 -- # IFS=: 00:07:24.932 04:10:37 -- accel/accel.sh@20 -- # read -r var val 00:07:24.932 04:10:37 -- accel/accel.sh@21 -- # val= 00:07:24.932 04:10:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.932 04:10:37 -- accel/accel.sh@20 -- # IFS=: 00:07:24.932 04:10:37 -- accel/accel.sh@20 -- # read -r var val 00:07:24.932 04:10:37 -- accel/accel.sh@21 -- # val= 00:07:24.932 04:10:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.932 04:10:37 -- accel/accel.sh@20 -- # IFS=: 00:07:24.932 04:10:37 -- accel/accel.sh@20 -- # read -r var val 00:07:24.932 04:10:37 -- accel/accel.sh@21 -- # val= 00:07:24.932 04:10:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.932 04:10:37 -- accel/accel.sh@20 -- # IFS=: 00:07:24.932 04:10:37 -- accel/accel.sh@20 -- # read -r var val 00:07:24.932 04:10:37 -- accel/accel.sh@21 -- # val= 00:07:24.932 04:10:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.932 04:10:37 -- accel/accel.sh@20 -- # IFS=: 00:07:24.932 04:10:37 -- accel/accel.sh@20 -- # read -r var val 00:07:24.932 ************************************ 00:07:24.932 END TEST accel_dif_verify 00:07:24.932 ************************************ 00:07:24.932 04:10:37 -- accel/accel.sh@21 -- # val= 00:07:24.932 04:10:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.932 04:10:37 -- accel/accel.sh@20 -- # IFS=: 00:07:24.932 04:10:37 -- accel/accel.sh@20 -- # read -r var val 00:07:24.932 04:10:37 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:24.932 04:10:37 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:07:24.932 04:10:37 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:24.932 00:07:24.932 real 0m2.871s 00:07:24.932 user 0m2.439s 00:07:24.932 sys 0m0.232s 00:07:24.932 04:10:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:24.932 
04:10:37 -- common/autotest_common.sh@10 -- # set +x 00:07:24.932 04:10:37 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:24.932 04:10:37 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:07:24.932 04:10:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:24.932 04:10:37 -- common/autotest_common.sh@10 -- # set +x 00:07:24.932 ************************************ 00:07:24.933 START TEST accel_dif_generate 00:07:24.933 ************************************ 00:07:24.933 04:10:37 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate 00:07:24.933 04:10:37 -- accel/accel.sh@16 -- # local accel_opc 00:07:24.933 04:10:37 -- accel/accel.sh@17 -- # local accel_module 00:07:24.933 04:10:37 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:07:24.933 04:10:37 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:24.933 04:10:37 -- accel/accel.sh@12 -- # build_accel_config 00:07:24.933 04:10:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:24.933 04:10:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:24.933 04:10:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:24.933 04:10:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:24.933 04:10:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:24.933 04:10:37 -- accel/accel.sh@41 -- # local IFS=, 00:07:24.933 04:10:37 -- accel/accel.sh@42 -- # jq -r . 00:07:24.933 [2024-12-06 04:10:37.232653] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:24.933 [2024-12-06 04:10:37.232939] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68854 ] 00:07:24.933 [2024-12-06 04:10:37.371456] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.933 [2024-12-06 04:10:37.449407] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.310 04:10:38 -- accel/accel.sh@18 -- # out=' 00:07:26.310 SPDK Configuration: 00:07:26.310 Core mask: 0x1 00:07:26.310 00:07:26.310 Accel Perf Configuration: 00:07:26.310 Workload Type: dif_generate 00:07:26.310 Vector size: 4096 bytes 00:07:26.310 Transfer size: 4096 bytes 00:07:26.310 Block size: 512 bytes 00:07:26.310 Metadata size: 8 bytes 00:07:26.310 Vector count 1 00:07:26.310 Module: software 00:07:26.310 Queue depth: 32 00:07:26.310 Allocate depth: 32 00:07:26.310 # threads/core: 1 00:07:26.310 Run time: 1 seconds 00:07:26.310 Verify: No 00:07:26.310 00:07:26.310 Running for 1 seconds... 
00:07:26.310 00:07:26.310 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:26.310 ------------------------------------------------------------------------------------ 00:07:26.310 0,0 125792/s 499 MiB/s 0 0 00:07:26.310 ==================================================================================== 00:07:26.310 Total 125792/s 491 MiB/s 0 0' 00:07:26.310 04:10:38 -- accel/accel.sh@20 -- # IFS=: 00:07:26.310 04:10:38 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:26.310 04:10:38 -- accel/accel.sh@20 -- # read -r var val 00:07:26.310 04:10:38 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:26.310 04:10:38 -- accel/accel.sh@12 -- # build_accel_config 00:07:26.310 04:10:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:26.310 04:10:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:26.310 04:10:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:26.310 04:10:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:26.310 04:10:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:26.310 04:10:38 -- accel/accel.sh@41 -- # local IFS=, 00:07:26.310 04:10:38 -- accel/accel.sh@42 -- # jq -r . 00:07:26.310 [2024-12-06 04:10:38.677403] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:26.310 [2024-12-06 04:10:38.678117] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68873 ] 00:07:26.310 [2024-12-06 04:10:38.813220] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.570 [2024-12-06 04:10:38.885761] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.570 04:10:38 -- accel/accel.sh@21 -- # val= 00:07:26.570 04:10:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.570 04:10:38 -- accel/accel.sh@20 -- # IFS=: 00:07:26.570 04:10:38 -- accel/accel.sh@20 -- # read -r var val 00:07:26.570 04:10:38 -- accel/accel.sh@21 -- # val= 00:07:26.570 04:10:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.570 04:10:38 -- accel/accel.sh@20 -- # IFS=: 00:07:26.570 04:10:38 -- accel/accel.sh@20 -- # read -r var val 00:07:26.570 04:10:38 -- accel/accel.sh@21 -- # val=0x1 00:07:26.570 04:10:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.570 04:10:38 -- accel/accel.sh@20 -- # IFS=: 00:07:26.570 04:10:38 -- accel/accel.sh@20 -- # read -r var val 00:07:26.570 04:10:38 -- accel/accel.sh@21 -- # val= 00:07:26.570 04:10:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.570 04:10:38 -- accel/accel.sh@20 -- # IFS=: 00:07:26.570 04:10:38 -- accel/accel.sh@20 -- # read -r var val 00:07:26.570 04:10:38 -- accel/accel.sh@21 -- # val= 00:07:26.570 04:10:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.570 04:10:38 -- accel/accel.sh@20 -- # IFS=: 00:07:26.570 04:10:38 -- accel/accel.sh@20 -- # read -r var val 00:07:26.570 04:10:38 -- accel/accel.sh@21 -- # val=dif_generate 00:07:26.570 04:10:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.570 04:10:38 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:07:26.570 04:10:38 -- accel/accel.sh@20 -- # IFS=: 00:07:26.570 04:10:38 -- accel/accel.sh@20 -- # read -r var val 00:07:26.570 04:10:38 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:26.570 04:10:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.570 04:10:38 -- accel/accel.sh@20 -- # IFS=: 00:07:26.570 04:10:38 -- accel/accel.sh@20 -- # read -r var val 
00:07:26.570 04:10:38 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:26.570 04:10:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.570 04:10:38 -- accel/accel.sh@20 -- # IFS=: 00:07:26.570 04:10:38 -- accel/accel.sh@20 -- # read -r var val 00:07:26.570 04:10:38 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:26.570 04:10:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.570 04:10:38 -- accel/accel.sh@20 -- # IFS=: 00:07:26.570 04:10:38 -- accel/accel.sh@20 -- # read -r var val 00:07:26.570 04:10:38 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:26.570 04:10:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.570 04:10:38 -- accel/accel.sh@20 -- # IFS=: 00:07:26.570 04:10:38 -- accel/accel.sh@20 -- # read -r var val 00:07:26.570 04:10:38 -- accel/accel.sh@21 -- # val= 00:07:26.570 04:10:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.570 04:10:38 -- accel/accel.sh@20 -- # IFS=: 00:07:26.570 04:10:38 -- accel/accel.sh@20 -- # read -r var val 00:07:26.570 04:10:38 -- accel/accel.sh@21 -- # val=software 00:07:26.570 04:10:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.570 04:10:38 -- accel/accel.sh@23 -- # accel_module=software 00:07:26.570 04:10:38 -- accel/accel.sh@20 -- # IFS=: 00:07:26.570 04:10:38 -- accel/accel.sh@20 -- # read -r var val 00:07:26.570 04:10:38 -- accel/accel.sh@21 -- # val=32 00:07:26.570 04:10:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.570 04:10:38 -- accel/accel.sh@20 -- # IFS=: 00:07:26.570 04:10:38 -- accel/accel.sh@20 -- # read -r var val 00:07:26.570 04:10:38 -- accel/accel.sh@21 -- # val=32 00:07:26.570 04:10:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.570 04:10:38 -- accel/accel.sh@20 -- # IFS=: 00:07:26.570 04:10:38 -- accel/accel.sh@20 -- # read -r var val 00:07:26.570 04:10:38 -- accel/accel.sh@21 -- # val=1 00:07:26.570 04:10:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.570 04:10:38 -- accel/accel.sh@20 -- # IFS=: 00:07:26.570 04:10:38 -- accel/accel.sh@20 -- # read -r var val 00:07:26.570 04:10:38 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:26.570 04:10:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.570 04:10:38 -- accel/accel.sh@20 -- # IFS=: 00:07:26.570 04:10:38 -- accel/accel.sh@20 -- # read -r var val 00:07:26.570 04:10:38 -- accel/accel.sh@21 -- # val=No 00:07:26.570 04:10:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.570 04:10:38 -- accel/accel.sh@20 -- # IFS=: 00:07:26.570 04:10:38 -- accel/accel.sh@20 -- # read -r var val 00:07:26.570 04:10:38 -- accel/accel.sh@21 -- # val= 00:07:26.570 04:10:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.570 04:10:38 -- accel/accel.sh@20 -- # IFS=: 00:07:26.570 04:10:38 -- accel/accel.sh@20 -- # read -r var val 00:07:26.570 04:10:38 -- accel/accel.sh@21 -- # val= 00:07:26.570 04:10:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.570 04:10:38 -- accel/accel.sh@20 -- # IFS=: 00:07:26.570 04:10:38 -- accel/accel.sh@20 -- # read -r var val 00:07:27.948 04:10:40 -- accel/accel.sh@21 -- # val= 00:07:27.948 04:10:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.948 04:10:40 -- accel/accel.sh@20 -- # IFS=: 00:07:27.948 04:10:40 -- accel/accel.sh@20 -- # read -r var val 00:07:27.948 04:10:40 -- accel/accel.sh@21 -- # val= 00:07:27.948 04:10:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.948 04:10:40 -- accel/accel.sh@20 -- # IFS=: 00:07:27.948 04:10:40 -- accel/accel.sh@20 -- # read -r var val 00:07:27.948 04:10:40 -- accel/accel.sh@21 -- # val= 00:07:27.948 04:10:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.948 04:10:40 -- 
accel/accel.sh@20 -- # IFS=: 00:07:27.948 04:10:40 -- accel/accel.sh@20 -- # read -r var val 00:07:27.948 04:10:40 -- accel/accel.sh@21 -- # val= 00:07:27.948 ************************************ 00:07:27.948 END TEST accel_dif_generate 00:07:27.948 ************************************ 00:07:27.948 04:10:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.948 04:10:40 -- accel/accel.sh@20 -- # IFS=: 00:07:27.948 04:10:40 -- accel/accel.sh@20 -- # read -r var val 00:07:27.948 04:10:40 -- accel/accel.sh@21 -- # val= 00:07:27.948 04:10:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.948 04:10:40 -- accel/accel.sh@20 -- # IFS=: 00:07:27.948 04:10:40 -- accel/accel.sh@20 -- # read -r var val 00:07:27.948 04:10:40 -- accel/accel.sh@21 -- # val= 00:07:27.948 04:10:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.948 04:10:40 -- accel/accel.sh@20 -- # IFS=: 00:07:27.948 04:10:40 -- accel/accel.sh@20 -- # read -r var val 00:07:27.948 04:10:40 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:27.948 04:10:40 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:07:27.948 04:10:40 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:27.948 00:07:27.948 real 0m2.885s 00:07:27.948 user 0m2.450s 00:07:27.948 sys 0m0.232s 00:07:27.948 04:10:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:27.948 04:10:40 -- common/autotest_common.sh@10 -- # set +x 00:07:27.948 04:10:40 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:27.948 04:10:40 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:07:27.948 04:10:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:27.948 04:10:40 -- common/autotest_common.sh@10 -- # set +x 00:07:27.948 ************************************ 00:07:27.948 START TEST accel_dif_generate_copy 00:07:27.948 ************************************ 00:07:27.948 04:10:40 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate_copy 00:07:27.948 04:10:40 -- accel/accel.sh@16 -- # local accel_opc 00:07:27.948 04:10:40 -- accel/accel.sh@17 -- # local accel_module 00:07:27.948 04:10:40 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:07:27.948 04:10:40 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:27.948 04:10:40 -- accel/accel.sh@12 -- # build_accel_config 00:07:27.948 04:10:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:27.948 04:10:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:27.948 04:10:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:27.948 04:10:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:27.948 04:10:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:27.949 04:10:40 -- accel/accel.sh@41 -- # local IFS=, 00:07:27.949 04:10:40 -- accel/accel.sh@42 -- # jq -r . 00:07:27.949 [2024-12-06 04:10:40.173659] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
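Each test ends with a real/user/sys triple like the 0m2.885s / 0m2.450s / 0m0.232s reported just above for accel_dif_generate: roughly one second of measured workload per accel_perf invocation, two invocations per test, plus application start-up. The output format matches what bash's time keyword prints, which is presumably how the harness produces it (an assumption; the timing call itself is not visible in this excerpt):

    # Hypothetical: producing a real/user/sys triple of the same shape in bash.
    accel_test() { sleep 1; }   # stand-in for the real test body
    time accel_test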
00:07:27.949 [2024-12-06 04:10:40.173797] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68902 ] 00:07:27.949 [2024-12-06 04:10:40.314302] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.949 [2024-12-06 04:10:40.400693] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.328 04:10:41 -- accel/accel.sh@18 -- # out=' 00:07:29.328 SPDK Configuration: 00:07:29.328 Core mask: 0x1 00:07:29.328 00:07:29.328 Accel Perf Configuration: 00:07:29.328 Workload Type: dif_generate_copy 00:07:29.328 Vector size: 4096 bytes 00:07:29.328 Transfer size: 4096 bytes 00:07:29.328 Vector count 1 00:07:29.328 Module: software 00:07:29.328 Queue depth: 32 00:07:29.328 Allocate depth: 32 00:07:29.328 # threads/core: 1 00:07:29.328 Run time: 1 seconds 00:07:29.328 Verify: No 00:07:29.328 00:07:29.328 Running for 1 seconds... 00:07:29.328 00:07:29.328 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:29.328 ------------------------------------------------------------------------------------ 00:07:29.328 0,0 93344/s 370 MiB/s 0 0 00:07:29.328 ==================================================================================== 00:07:29.328 Total 93344/s 364 MiB/s 0 0' 00:07:29.328 04:10:41 -- accel/accel.sh@20 -- # IFS=: 00:07:29.328 04:10:41 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:29.328 04:10:41 -- accel/accel.sh@20 -- # read -r var val 00:07:29.328 04:10:41 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:29.328 04:10:41 -- accel/accel.sh@12 -- # build_accel_config 00:07:29.328 04:10:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:29.328 04:10:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:29.328 04:10:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:29.328 04:10:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:29.328 04:10:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:29.328 04:10:41 -- accel/accel.sh@41 -- # local IFS=, 00:07:29.328 04:10:41 -- accel/accel.sh@42 -- # jq -r . 00:07:29.328 [2024-12-06 04:10:41.636693] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:29.328 [2024-12-06 04:10:41.636931] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68926 ] 00:07:29.328 [2024-12-06 04:10:41.771918] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.328 [2024-12-06 04:10:41.857740] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.588 04:10:41 -- accel/accel.sh@21 -- # val= 00:07:29.588 04:10:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.588 04:10:41 -- accel/accel.sh@20 -- # IFS=: 00:07:29.588 04:10:41 -- accel/accel.sh@20 -- # read -r var val 00:07:29.588 04:10:41 -- accel/accel.sh@21 -- # val= 00:07:29.588 04:10:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.588 04:10:41 -- accel/accel.sh@20 -- # IFS=: 00:07:29.588 04:10:41 -- accel/accel.sh@20 -- # read -r var val 00:07:29.588 04:10:41 -- accel/accel.sh@21 -- # val=0x1 00:07:29.588 04:10:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.588 04:10:41 -- accel/accel.sh@20 -- # IFS=: 00:07:29.588 04:10:41 -- accel/accel.sh@20 -- # read -r var val 00:07:29.588 04:10:41 -- accel/accel.sh@21 -- # val= 00:07:29.588 04:10:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.588 04:10:41 -- accel/accel.sh@20 -- # IFS=: 00:07:29.588 04:10:41 -- accel/accel.sh@20 -- # read -r var val 00:07:29.588 04:10:41 -- accel/accel.sh@21 -- # val= 00:07:29.588 04:10:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.588 04:10:41 -- accel/accel.sh@20 -- # IFS=: 00:07:29.588 04:10:41 -- accel/accel.sh@20 -- # read -r var val 00:07:29.588 04:10:41 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:07:29.588 04:10:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.588 04:10:41 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:07:29.588 04:10:41 -- accel/accel.sh@20 -- # IFS=: 00:07:29.588 04:10:41 -- accel/accel.sh@20 -- # read -r var val 00:07:29.588 04:10:41 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:29.588 04:10:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.588 04:10:41 -- accel/accel.sh@20 -- # IFS=: 00:07:29.588 04:10:41 -- accel/accel.sh@20 -- # read -r var val 00:07:29.588 04:10:41 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:29.588 04:10:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.588 04:10:41 -- accel/accel.sh@20 -- # IFS=: 00:07:29.588 04:10:41 -- accel/accel.sh@20 -- # read -r var val 00:07:29.588 04:10:41 -- accel/accel.sh@21 -- # val= 00:07:29.588 04:10:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.588 04:10:41 -- accel/accel.sh@20 -- # IFS=: 00:07:29.588 04:10:41 -- accel/accel.sh@20 -- # read -r var val 00:07:29.588 04:10:41 -- accel/accel.sh@21 -- # val=software 00:07:29.588 04:10:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.588 04:10:41 -- accel/accel.sh@23 -- # accel_module=software 00:07:29.588 04:10:41 -- accel/accel.sh@20 -- # IFS=: 00:07:29.588 04:10:41 -- accel/accel.sh@20 -- # read -r var val 00:07:29.588 04:10:41 -- accel/accel.sh@21 -- # val=32 00:07:29.588 04:10:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.588 04:10:41 -- accel/accel.sh@20 -- # IFS=: 00:07:29.588 04:10:41 -- accel/accel.sh@20 -- # read -r var val 00:07:29.588 04:10:41 -- accel/accel.sh@21 -- # val=32 00:07:29.588 04:10:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.588 04:10:41 -- accel/accel.sh@20 -- # IFS=: 00:07:29.588 04:10:41 -- accel/accel.sh@20 -- # read -r var val 00:07:29.588 04:10:41 -- accel/accel.sh@21 
-- # val=1 00:07:29.588 04:10:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.588 04:10:41 -- accel/accel.sh@20 -- # IFS=: 00:07:29.588 04:10:41 -- accel/accel.sh@20 -- # read -r var val 00:07:29.588 04:10:41 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:29.588 04:10:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.588 04:10:41 -- accel/accel.sh@20 -- # IFS=: 00:07:29.588 04:10:41 -- accel/accel.sh@20 -- # read -r var val 00:07:29.588 04:10:41 -- accel/accel.sh@21 -- # val=No 00:07:29.588 04:10:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.588 04:10:41 -- accel/accel.sh@20 -- # IFS=: 00:07:29.588 04:10:41 -- accel/accel.sh@20 -- # read -r var val 00:07:29.588 04:10:41 -- accel/accel.sh@21 -- # val= 00:07:29.588 04:10:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.588 04:10:41 -- accel/accel.sh@20 -- # IFS=: 00:07:29.588 04:10:41 -- accel/accel.sh@20 -- # read -r var val 00:07:29.588 04:10:41 -- accel/accel.sh@21 -- # val= 00:07:29.588 04:10:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.588 04:10:41 -- accel/accel.sh@20 -- # IFS=: 00:07:29.588 04:10:41 -- accel/accel.sh@20 -- # read -r var val 00:07:30.658 04:10:43 -- accel/accel.sh@21 -- # val= 00:07:30.659 04:10:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.659 04:10:43 -- accel/accel.sh@20 -- # IFS=: 00:07:30.659 04:10:43 -- accel/accel.sh@20 -- # read -r var val 00:07:30.659 04:10:43 -- accel/accel.sh@21 -- # val= 00:07:30.659 04:10:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.659 04:10:43 -- accel/accel.sh@20 -- # IFS=: 00:07:30.659 04:10:43 -- accel/accel.sh@20 -- # read -r var val 00:07:30.659 04:10:43 -- accel/accel.sh@21 -- # val= 00:07:30.659 04:10:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.659 04:10:43 -- accel/accel.sh@20 -- # IFS=: 00:07:30.659 04:10:43 -- accel/accel.sh@20 -- # read -r var val 00:07:30.659 04:10:43 -- accel/accel.sh@21 -- # val= 00:07:30.659 04:10:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.659 04:10:43 -- accel/accel.sh@20 -- # IFS=: 00:07:30.659 04:10:43 -- accel/accel.sh@20 -- # read -r var val 00:07:30.659 04:10:43 -- accel/accel.sh@21 -- # val= 00:07:30.659 04:10:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.659 04:10:43 -- accel/accel.sh@20 -- # IFS=: 00:07:30.659 04:10:43 -- accel/accel.sh@20 -- # read -r var val 00:07:30.659 04:10:43 -- accel/accel.sh@21 -- # val= 00:07:30.659 04:10:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.659 04:10:43 -- accel/accel.sh@20 -- # IFS=: 00:07:30.659 04:10:43 -- accel/accel.sh@20 -- # read -r var val 00:07:30.659 04:10:43 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:30.659 04:10:43 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:07:30.659 04:10:43 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:30.659 00:07:30.659 real 0m2.907s 00:07:30.659 user 0m2.464s 00:07:30.659 sys 0m0.237s 00:07:30.659 04:10:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:30.659 ************************************ 00:07:30.659 END TEST accel_dif_generate_copy 00:07:30.659 ************************************ 00:07:30.659 04:10:43 -- common/autotest_common.sh@10 -- # set +x 00:07:30.659 04:10:43 -- accel/accel.sh@107 -- # [[ y == y ]] 00:07:30.659 04:10:43 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:30.659 04:10:43 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:07:30.659 04:10:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:30.659 04:10:43 -- 
common/autotest_common.sh@10 -- # set +x 00:07:30.659 ************************************ 00:07:30.659 START TEST accel_comp 00:07:30.659 ************************************ 00:07:30.659 04:10:43 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:30.659 04:10:43 -- accel/accel.sh@16 -- # local accel_opc 00:07:30.659 04:10:43 -- accel/accel.sh@17 -- # local accel_module 00:07:30.659 04:10:43 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:30.659 04:10:43 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:30.659 04:10:43 -- accel/accel.sh@12 -- # build_accel_config 00:07:30.659 04:10:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:30.659 04:10:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:30.659 04:10:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:30.659 04:10:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:30.659 04:10:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:30.659 04:10:43 -- accel/accel.sh@41 -- # local IFS=, 00:07:30.659 04:10:43 -- accel/accel.sh@42 -- # jq -r . 00:07:30.659 [2024-12-06 04:10:43.130829] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:30.659 [2024-12-06 04:10:43.130933] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68956 ] 00:07:30.935 [2024-12-06 04:10:43.274565] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.935 [2024-12-06 04:10:43.352413] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.312 04:10:44 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:32.312 00:07:32.312 SPDK Configuration: 00:07:32.312 Core mask: 0x1 00:07:32.312 00:07:32.312 Accel Perf Configuration: 00:07:32.312 Workload Type: compress 00:07:32.312 Transfer size: 4096 bytes 00:07:32.312 Vector count 1 00:07:32.312 Module: software 00:07:32.312 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:32.312 Queue depth: 32 00:07:32.312 Allocate depth: 32 00:07:32.312 # threads/core: 1 00:07:32.312 Run time: 1 seconds 00:07:32.312 Verify: No 00:07:32.312 00:07:32.312 Running for 1 seconds... 
00:07:32.312 00:07:32.312 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:32.312 ------------------------------------------------------------------------------------ 00:07:32.312 0,0 49600/s 206 MiB/s 0 0 00:07:32.312 ==================================================================================== 00:07:32.312 Total 49600/s 193 MiB/s 0 0' 00:07:32.312 04:10:44 -- accel/accel.sh@20 -- # IFS=: 00:07:32.312 04:10:44 -- accel/accel.sh@20 -- # read -r var val 00:07:32.312 04:10:44 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:32.313 04:10:44 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:32.313 04:10:44 -- accel/accel.sh@12 -- # build_accel_config 00:07:32.313 04:10:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:32.313 04:10:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:32.313 04:10:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:32.313 04:10:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:32.313 04:10:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:32.313 04:10:44 -- accel/accel.sh@41 -- # local IFS=, 00:07:32.313 04:10:44 -- accel/accel.sh@42 -- # jq -r . 00:07:32.313 [2024-12-06 04:10:44.573720] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:32.313 [2024-12-06 04:10:44.574033] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68978 ] 00:07:32.313 [2024-12-06 04:10:44.711889] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.313 [2024-12-06 04:10:44.786881] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.313 04:10:44 -- accel/accel.sh@21 -- # val= 00:07:32.313 04:10:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.313 04:10:44 -- accel/accel.sh@20 -- # IFS=: 00:07:32.313 04:10:44 -- accel/accel.sh@20 -- # read -r var val 00:07:32.313 04:10:44 -- accel/accel.sh@21 -- # val= 00:07:32.313 04:10:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.313 04:10:44 -- accel/accel.sh@20 -- # IFS=: 00:07:32.313 04:10:44 -- accel/accel.sh@20 -- # read -r var val 00:07:32.313 04:10:44 -- accel/accel.sh@21 -- # val= 00:07:32.313 04:10:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.313 04:10:44 -- accel/accel.sh@20 -- # IFS=: 00:07:32.313 04:10:44 -- accel/accel.sh@20 -- # read -r var val 00:07:32.313 04:10:44 -- accel/accel.sh@21 -- # val=0x1 00:07:32.313 04:10:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.313 04:10:44 -- accel/accel.sh@20 -- # IFS=: 00:07:32.313 04:10:44 -- accel/accel.sh@20 -- # read -r var val 00:07:32.313 04:10:44 -- accel/accel.sh@21 -- # val= 00:07:32.313 04:10:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.313 04:10:44 -- accel/accel.sh@20 -- # IFS=: 00:07:32.313 04:10:44 -- accel/accel.sh@20 -- # read -r var val 00:07:32.313 04:10:44 -- accel/accel.sh@21 -- # val= 00:07:32.313 04:10:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.313 04:10:44 -- accel/accel.sh@20 -- # IFS=: 00:07:32.313 04:10:44 -- accel/accel.sh@20 -- # read -r var val 00:07:32.313 04:10:44 -- accel/accel.sh@21 -- # val=compress 00:07:32.313 04:10:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.313 04:10:44 -- accel/accel.sh@24 -- # accel_opc=compress 00:07:32.313 04:10:44 -- accel/accel.sh@20 -- # IFS=: 
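Unlike the fixed-pattern workloads earlier in the log, the compress test feeds accel_perf an input file: the -l /home/vagrant/spdk_repo/spdk/test/accel/bib argument shows up as the File Name line in the configuration above. Replaying it by hand only needs the flags already visible in the trace (dropping the -c config descriptor is the one assumption here):

    # Hypothetical manual replay of the compress workload measured above.
    ACCEL_PERF=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf
    BIB=/home/vagrant/spdk_repo/spdk/test/accel/bib   # input corpus used by the test

    # No -y, hence "Verify: No" in the configuration block above.
    "$ACCEL_PERF" -t 1 -w compress -l "$BIB"

The decompress test that follows adds -y, which is why its configuration block reports Verify: Yes.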
00:07:32.313 04:10:44 -- accel/accel.sh@20 -- # read -r var val 00:07:32.313 04:10:44 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:32.313 04:10:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.313 04:10:44 -- accel/accel.sh@20 -- # IFS=: 00:07:32.313 04:10:44 -- accel/accel.sh@20 -- # read -r var val 00:07:32.313 04:10:44 -- accel/accel.sh@21 -- # val= 00:07:32.313 04:10:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.313 04:10:44 -- accel/accel.sh@20 -- # IFS=: 00:07:32.313 04:10:44 -- accel/accel.sh@20 -- # read -r var val 00:07:32.313 04:10:44 -- accel/accel.sh@21 -- # val=software 00:07:32.313 04:10:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.313 04:10:44 -- accel/accel.sh@23 -- # accel_module=software 00:07:32.313 04:10:44 -- accel/accel.sh@20 -- # IFS=: 00:07:32.313 04:10:44 -- accel/accel.sh@20 -- # read -r var val 00:07:32.313 04:10:44 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:32.313 04:10:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.313 04:10:44 -- accel/accel.sh@20 -- # IFS=: 00:07:32.313 04:10:44 -- accel/accel.sh@20 -- # read -r var val 00:07:32.313 04:10:44 -- accel/accel.sh@21 -- # val=32 00:07:32.313 04:10:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.313 04:10:44 -- accel/accel.sh@20 -- # IFS=: 00:07:32.313 04:10:44 -- accel/accel.sh@20 -- # read -r var val 00:07:32.313 04:10:44 -- accel/accel.sh@21 -- # val=32 00:07:32.313 04:10:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.313 04:10:44 -- accel/accel.sh@20 -- # IFS=: 00:07:32.313 04:10:44 -- accel/accel.sh@20 -- # read -r var val 00:07:32.313 04:10:44 -- accel/accel.sh@21 -- # val=1 00:07:32.313 04:10:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.313 04:10:44 -- accel/accel.sh@20 -- # IFS=: 00:07:32.313 04:10:44 -- accel/accel.sh@20 -- # read -r var val 00:07:32.313 04:10:44 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:32.313 04:10:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.313 04:10:44 -- accel/accel.sh@20 -- # IFS=: 00:07:32.313 04:10:44 -- accel/accel.sh@20 -- # read -r var val 00:07:32.313 04:10:44 -- accel/accel.sh@21 -- # val=No 00:07:32.313 04:10:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.313 04:10:44 -- accel/accel.sh@20 -- # IFS=: 00:07:32.313 04:10:44 -- accel/accel.sh@20 -- # read -r var val 00:07:32.313 04:10:44 -- accel/accel.sh@21 -- # val= 00:07:32.313 04:10:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.313 04:10:44 -- accel/accel.sh@20 -- # IFS=: 00:07:32.313 04:10:44 -- accel/accel.sh@20 -- # read -r var val 00:07:32.313 04:10:44 -- accel/accel.sh@21 -- # val= 00:07:32.313 04:10:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.313 04:10:44 -- accel/accel.sh@20 -- # IFS=: 00:07:32.313 04:10:44 -- accel/accel.sh@20 -- # read -r var val 00:07:33.687 04:10:45 -- accel/accel.sh@21 -- # val= 00:07:33.687 04:10:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.687 04:10:45 -- accel/accel.sh@20 -- # IFS=: 00:07:33.687 04:10:45 -- accel/accel.sh@20 -- # read -r var val 00:07:33.687 04:10:45 -- accel/accel.sh@21 -- # val= 00:07:33.687 04:10:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.687 04:10:45 -- accel/accel.sh@20 -- # IFS=: 00:07:33.687 04:10:45 -- accel/accel.sh@20 -- # read -r var val 00:07:33.687 04:10:45 -- accel/accel.sh@21 -- # val= 00:07:33.687 04:10:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.687 04:10:45 -- accel/accel.sh@20 -- # IFS=: 00:07:33.687 04:10:45 -- accel/accel.sh@20 -- # read -r var val 00:07:33.687 04:10:45 -- accel/accel.sh@21 -- # val= 
00:07:33.687 04:10:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.687 04:10:45 -- accel/accel.sh@20 -- # IFS=: 00:07:33.687 04:10:45 -- accel/accel.sh@20 -- # read -r var val 00:07:33.687 04:10:45 -- accel/accel.sh@21 -- # val= 00:07:33.687 04:10:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.687 04:10:45 -- accel/accel.sh@20 -- # IFS=: 00:07:33.687 04:10:45 -- accel/accel.sh@20 -- # read -r var val 00:07:33.687 04:10:45 -- accel/accel.sh@21 -- # val= 00:07:33.687 04:10:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.687 04:10:45 -- accel/accel.sh@20 -- # IFS=: 00:07:33.687 04:10:45 -- accel/accel.sh@20 -- # read -r var val 00:07:33.687 04:10:45 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:33.687 04:10:45 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:07:33.687 04:10:45 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:33.687 ************************************ 00:07:33.687 END TEST accel_comp 00:07:33.687 ************************************ 00:07:33.687 00:07:33.687 real 0m2.883s 00:07:33.687 user 0m2.443s 00:07:33.687 sys 0m0.235s 00:07:33.687 04:10:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:33.687 04:10:45 -- common/autotest_common.sh@10 -- # set +x 00:07:33.687 04:10:46 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:33.687 04:10:46 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:33.687 04:10:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:33.687 04:10:46 -- common/autotest_common.sh@10 -- # set +x 00:07:33.687 ************************************ 00:07:33.687 START TEST accel_decomp 00:07:33.687 ************************************ 00:07:33.687 04:10:46 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:33.687 04:10:46 -- accel/accel.sh@16 -- # local accel_opc 00:07:33.687 04:10:46 -- accel/accel.sh@17 -- # local accel_module 00:07:33.687 04:10:46 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:33.687 04:10:46 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:33.687 04:10:46 -- accel/accel.sh@12 -- # build_accel_config 00:07:33.687 04:10:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:33.687 04:10:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:33.687 04:10:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:33.687 04:10:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:33.687 04:10:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:33.687 04:10:46 -- accel/accel.sh@41 -- # local IFS=, 00:07:33.687 04:10:46 -- accel/accel.sh@42 -- # jq -r . 00:07:33.687 [2024-12-06 04:10:46.067628] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:33.687 [2024-12-06 04:10:46.067738] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69012 ] 00:07:33.687 [2024-12-06 04:10:46.204680] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.945 [2024-12-06 04:10:46.275329] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.322 04:10:47 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:35.322 00:07:35.322 SPDK Configuration: 00:07:35.322 Core mask: 0x1 00:07:35.322 00:07:35.322 Accel Perf Configuration: 00:07:35.322 Workload Type: decompress 00:07:35.322 Transfer size: 4096 bytes 00:07:35.322 Vector count 1 00:07:35.322 Module: software 00:07:35.322 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:35.322 Queue depth: 32 00:07:35.322 Allocate depth: 32 00:07:35.322 # threads/core: 1 00:07:35.322 Run time: 1 seconds 00:07:35.322 Verify: Yes 00:07:35.322 00:07:35.322 Running for 1 seconds... 00:07:35.322 00:07:35.322 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:35.322 ------------------------------------------------------------------------------------ 00:07:35.322 0,0 71424/s 131 MiB/s 0 0 00:07:35.322 ==================================================================================== 00:07:35.322 Total 71424/s 279 MiB/s 0 0' 00:07:35.322 04:10:47 -- accel/accel.sh@20 -- # IFS=: 00:07:35.322 04:10:47 -- accel/accel.sh@20 -- # read -r var val 00:07:35.322 04:10:47 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:35.322 04:10:47 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:35.322 04:10:47 -- accel/accel.sh@12 -- # build_accel_config 00:07:35.322 04:10:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:35.322 04:10:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:35.322 04:10:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:35.322 04:10:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:35.322 04:10:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:35.322 04:10:47 -- accel/accel.sh@41 -- # local IFS=, 00:07:35.322 04:10:47 -- accel/accel.sh@42 -- # jq -r . 00:07:35.322 [2024-12-06 04:10:47.494890] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:35.322 [2024-12-06 04:10:47.495153] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69032 ] 00:07:35.322 [2024-12-06 04:10:47.628257] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.322 [2024-12-06 04:10:47.718649] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.322 04:10:47 -- accel/accel.sh@21 -- # val= 00:07:35.322 04:10:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.322 04:10:47 -- accel/accel.sh@20 -- # IFS=: 00:07:35.322 04:10:47 -- accel/accel.sh@20 -- # read -r var val 00:07:35.322 04:10:47 -- accel/accel.sh@21 -- # val= 00:07:35.322 04:10:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.322 04:10:47 -- accel/accel.sh@20 -- # IFS=: 00:07:35.322 04:10:47 -- accel/accel.sh@20 -- # read -r var val 00:07:35.322 04:10:47 -- accel/accel.sh@21 -- # val= 00:07:35.322 04:10:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.322 04:10:47 -- accel/accel.sh@20 -- # IFS=: 00:07:35.322 04:10:47 -- accel/accel.sh@20 -- # read -r var val 00:07:35.322 04:10:47 -- accel/accel.sh@21 -- # val=0x1 00:07:35.322 04:10:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.322 04:10:47 -- accel/accel.sh@20 -- # IFS=: 00:07:35.322 04:10:47 -- accel/accel.sh@20 -- # read -r var val 00:07:35.322 04:10:47 -- accel/accel.sh@21 -- # val= 00:07:35.322 04:10:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.322 04:10:47 -- accel/accel.sh@20 -- # IFS=: 00:07:35.322 04:10:47 -- accel/accel.sh@20 -- # read -r var val 00:07:35.322 04:10:47 -- accel/accel.sh@21 -- # val= 00:07:35.322 04:10:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.322 04:10:47 -- accel/accel.sh@20 -- # IFS=: 00:07:35.322 04:10:47 -- accel/accel.sh@20 -- # read -r var val 00:07:35.322 04:10:47 -- accel/accel.sh@21 -- # val=decompress 00:07:35.322 04:10:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.322 04:10:47 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:35.322 04:10:47 -- accel/accel.sh@20 -- # IFS=: 00:07:35.322 04:10:47 -- accel/accel.sh@20 -- # read -r var val 00:07:35.322 04:10:47 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:35.322 04:10:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.322 04:10:47 -- accel/accel.sh@20 -- # IFS=: 00:07:35.322 04:10:47 -- accel/accel.sh@20 -- # read -r var val 00:07:35.322 04:10:47 -- accel/accel.sh@21 -- # val= 00:07:35.322 04:10:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.322 04:10:47 -- accel/accel.sh@20 -- # IFS=: 00:07:35.322 04:10:47 -- accel/accel.sh@20 -- # read -r var val 00:07:35.322 04:10:47 -- accel/accel.sh@21 -- # val=software 00:07:35.322 04:10:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.322 04:10:47 -- accel/accel.sh@23 -- # accel_module=software 00:07:35.322 04:10:47 -- accel/accel.sh@20 -- # IFS=: 00:07:35.322 04:10:47 -- accel/accel.sh@20 -- # read -r var val 00:07:35.322 04:10:47 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:35.322 04:10:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.322 04:10:47 -- accel/accel.sh@20 -- # IFS=: 00:07:35.322 04:10:47 -- accel/accel.sh@20 -- # read -r var val 00:07:35.322 04:10:47 -- accel/accel.sh@21 -- # val=32 00:07:35.322 04:10:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.322 04:10:47 -- accel/accel.sh@20 -- # IFS=: 00:07:35.322 04:10:47 -- accel/accel.sh@20 -- # read -r var val 00:07:35.322 04:10:47 -- 
accel/accel.sh@21 -- # val=32 00:07:35.322 04:10:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.322 04:10:47 -- accel/accel.sh@20 -- # IFS=: 00:07:35.322 04:10:47 -- accel/accel.sh@20 -- # read -r var val 00:07:35.322 04:10:47 -- accel/accel.sh@21 -- # val=1 00:07:35.322 04:10:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.322 04:10:47 -- accel/accel.sh@20 -- # IFS=: 00:07:35.322 04:10:47 -- accel/accel.sh@20 -- # read -r var val 00:07:35.322 04:10:47 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:35.322 04:10:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.322 04:10:47 -- accel/accel.sh@20 -- # IFS=: 00:07:35.322 04:10:47 -- accel/accel.sh@20 -- # read -r var val 00:07:35.322 04:10:47 -- accel/accel.sh@21 -- # val=Yes 00:07:35.322 04:10:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.322 04:10:47 -- accel/accel.sh@20 -- # IFS=: 00:07:35.322 04:10:47 -- accel/accel.sh@20 -- # read -r var val 00:07:35.322 04:10:47 -- accel/accel.sh@21 -- # val= 00:07:35.322 04:10:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.322 04:10:47 -- accel/accel.sh@20 -- # IFS=: 00:07:35.322 04:10:47 -- accel/accel.sh@20 -- # read -r var val 00:07:35.322 04:10:47 -- accel/accel.sh@21 -- # val= 00:07:35.322 04:10:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.322 04:10:47 -- accel/accel.sh@20 -- # IFS=: 00:07:35.322 04:10:47 -- accel/accel.sh@20 -- # read -r var val 00:07:36.697 04:10:48 -- accel/accel.sh@21 -- # val= 00:07:36.697 04:10:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.697 04:10:48 -- accel/accel.sh@20 -- # IFS=: 00:07:36.697 04:10:48 -- accel/accel.sh@20 -- # read -r var val 00:07:36.697 04:10:48 -- accel/accel.sh@21 -- # val= 00:07:36.697 04:10:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.697 04:10:48 -- accel/accel.sh@20 -- # IFS=: 00:07:36.697 04:10:48 -- accel/accel.sh@20 -- # read -r var val 00:07:36.697 04:10:48 -- accel/accel.sh@21 -- # val= 00:07:36.697 04:10:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.697 04:10:48 -- accel/accel.sh@20 -- # IFS=: 00:07:36.697 04:10:48 -- accel/accel.sh@20 -- # read -r var val 00:07:36.697 04:10:48 -- accel/accel.sh@21 -- # val= 00:07:36.697 ************************************ 00:07:36.697 END TEST accel_decomp 00:07:36.697 ************************************ 00:07:36.697 04:10:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.697 04:10:48 -- accel/accel.sh@20 -- # IFS=: 00:07:36.697 04:10:48 -- accel/accel.sh@20 -- # read -r var val 00:07:36.697 04:10:48 -- accel/accel.sh@21 -- # val= 00:07:36.697 04:10:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.697 04:10:48 -- accel/accel.sh@20 -- # IFS=: 00:07:36.697 04:10:48 -- accel/accel.sh@20 -- # read -r var val 00:07:36.697 04:10:48 -- accel/accel.sh@21 -- # val= 00:07:36.697 04:10:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.697 04:10:48 -- accel/accel.sh@20 -- # IFS=: 00:07:36.697 04:10:48 -- accel/accel.sh@20 -- # read -r var val 00:07:36.697 04:10:48 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:36.697 04:10:48 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:36.698 04:10:48 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:36.698 00:07:36.698 real 0m2.887s 00:07:36.698 user 0m2.462s 00:07:36.698 sys 0m0.224s 00:07:36.698 04:10:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:36.698 04:10:48 -- common/autotest_common.sh@10 -- # set +x 00:07:36.698 04:10:48 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
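The run that just completed above ("TEST accel_decomp") drives SPDK's accel_perf example through the accel_test wrapper in test/accel/accel.sh. For readers who want to repeat the measurement outside the harness, a minimal sketch follows; it assumes the repository layout recorded in the log (/home/vagrant/spdk_repo/spdk), hugepages already configured (for example via scripts/setup.sh), and it drops the -c /dev/fd/62 argument, which here carries an effectively empty accel JSON config assembled by build_accel_config, so omitting it should leave accel_perf on its default software module, the same module this run reports:

    # single core, software decompress of test/accel/bib, 4096-byte transfers,
    # 1 second run, verification enabled (mirrors the flags recorded above)
    cd /home/vagrant/spdk_repo/spdk
    ./build/examples/accel_perf -t 1 -w decompress -l ./test/accel/bib -y

The configuration block it prints (core mask 0x1, queue depth 32, module "software") should match the one logged at 00:07:35.322, and the Total row is simply rate times transfer size: 71424/s x 4096 B is about 279 MiB/s.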
00:07:36.698 04:10:48 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:36.698 04:10:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:36.698 04:10:48 -- common/autotest_common.sh@10 -- # set +x 00:07:36.698 ************************************ 00:07:36.698 START TEST accel_decmop_full 00:07:36.698 ************************************ 00:07:36.698 04:10:48 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:36.698 04:10:48 -- accel/accel.sh@16 -- # local accel_opc 00:07:36.698 04:10:48 -- accel/accel.sh@17 -- # local accel_module 00:07:36.698 04:10:48 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:36.698 04:10:48 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:36.698 04:10:48 -- accel/accel.sh@12 -- # build_accel_config 00:07:36.698 04:10:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:36.698 04:10:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:36.698 04:10:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:36.698 04:10:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:36.698 04:10:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:36.698 04:10:48 -- accel/accel.sh@41 -- # local IFS=, 00:07:36.698 04:10:48 -- accel/accel.sh@42 -- # jq -r . 00:07:36.698 [2024-12-06 04:10:49.008294] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:36.698 [2024-12-06 04:10:49.008640] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69061 ] 00:07:36.698 [2024-12-06 04:10:49.147141] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.698 [2024-12-06 04:10:49.209001] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.072 04:10:50 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:38.072 00:07:38.072 SPDK Configuration: 00:07:38.072 Core mask: 0x1 00:07:38.072 00:07:38.072 Accel Perf Configuration: 00:07:38.072 Workload Type: decompress 00:07:38.072 Transfer size: 111250 bytes 00:07:38.072 Vector count 1 00:07:38.072 Module: software 00:07:38.072 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:38.072 Queue depth: 32 00:07:38.072 Allocate depth: 32 00:07:38.072 # threads/core: 1 00:07:38.072 Run time: 1 seconds 00:07:38.072 Verify: Yes 00:07:38.072 00:07:38.072 Running for 1 seconds... 
00:07:38.072 00:07:38.072 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:38.072 ------------------------------------------------------------------------------------ 00:07:38.072 0,0 4800/s 198 MiB/s 0 0 00:07:38.072 ==================================================================================== 00:07:38.072 Total 4800/s 509 MiB/s 0 0' 00:07:38.072 04:10:50 -- accel/accel.sh@20 -- # IFS=: 00:07:38.072 04:10:50 -- accel/accel.sh@20 -- # read -r var val 00:07:38.072 04:10:50 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:38.072 04:10:50 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:38.072 04:10:50 -- accel/accel.sh@12 -- # build_accel_config 00:07:38.072 04:10:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:38.072 04:10:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:38.072 04:10:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:38.072 04:10:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:38.072 04:10:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:38.072 04:10:50 -- accel/accel.sh@41 -- # local IFS=, 00:07:38.072 04:10:50 -- accel/accel.sh@42 -- # jq -r . 00:07:38.072 [2024-12-06 04:10:50.437839] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:38.072 [2024-12-06 04:10:50.437925] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69080 ] 00:07:38.072 [2024-12-06 04:10:50.575446] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.331 [2024-12-06 04:10:50.635708] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.331 04:10:50 -- accel/accel.sh@21 -- # val= 00:07:38.331 04:10:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.331 04:10:50 -- accel/accel.sh@20 -- # IFS=: 00:07:38.331 04:10:50 -- accel/accel.sh@20 -- # read -r var val 00:07:38.331 04:10:50 -- accel/accel.sh@21 -- # val= 00:07:38.331 04:10:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.331 04:10:50 -- accel/accel.sh@20 -- # IFS=: 00:07:38.331 04:10:50 -- accel/accel.sh@20 -- # read -r var val 00:07:38.331 04:10:50 -- accel/accel.sh@21 -- # val= 00:07:38.331 04:10:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.331 04:10:50 -- accel/accel.sh@20 -- # IFS=: 00:07:38.331 04:10:50 -- accel/accel.sh@20 -- # read -r var val 00:07:38.331 04:10:50 -- accel/accel.sh@21 -- # val=0x1 00:07:38.331 04:10:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.331 04:10:50 -- accel/accel.sh@20 -- # IFS=: 00:07:38.331 04:10:50 -- accel/accel.sh@20 -- # read -r var val 00:07:38.331 04:10:50 -- accel/accel.sh@21 -- # val= 00:07:38.331 04:10:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.331 04:10:50 -- accel/accel.sh@20 -- # IFS=: 00:07:38.331 04:10:50 -- accel/accel.sh@20 -- # read -r var val 00:07:38.331 04:10:50 -- accel/accel.sh@21 -- # val= 00:07:38.331 04:10:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.331 04:10:50 -- accel/accel.sh@20 -- # IFS=: 00:07:38.331 04:10:50 -- accel/accel.sh@20 -- # read -r var val 00:07:38.331 04:10:50 -- accel/accel.sh@21 -- # val=decompress 00:07:38.331 04:10:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.331 04:10:50 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:38.331 04:10:50 -- accel/accel.sh@20 
-- # IFS=: 00:07:38.331 04:10:50 -- accel/accel.sh@20 -- # read -r var val 00:07:38.331 04:10:50 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:38.331 04:10:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.331 04:10:50 -- accel/accel.sh@20 -- # IFS=: 00:07:38.331 04:10:50 -- accel/accel.sh@20 -- # read -r var val 00:07:38.331 04:10:50 -- accel/accel.sh@21 -- # val= 00:07:38.331 04:10:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.331 04:10:50 -- accel/accel.sh@20 -- # IFS=: 00:07:38.331 04:10:50 -- accel/accel.sh@20 -- # read -r var val 00:07:38.331 04:10:50 -- accel/accel.sh@21 -- # val=software 00:07:38.331 04:10:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.331 04:10:50 -- accel/accel.sh@23 -- # accel_module=software 00:07:38.331 04:10:50 -- accel/accel.sh@20 -- # IFS=: 00:07:38.331 04:10:50 -- accel/accel.sh@20 -- # read -r var val 00:07:38.331 04:10:50 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:38.332 04:10:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.332 04:10:50 -- accel/accel.sh@20 -- # IFS=: 00:07:38.332 04:10:50 -- accel/accel.sh@20 -- # read -r var val 00:07:38.332 04:10:50 -- accel/accel.sh@21 -- # val=32 00:07:38.332 04:10:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.332 04:10:50 -- accel/accel.sh@20 -- # IFS=: 00:07:38.332 04:10:50 -- accel/accel.sh@20 -- # read -r var val 00:07:38.332 04:10:50 -- accel/accel.sh@21 -- # val=32 00:07:38.332 04:10:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.332 04:10:50 -- accel/accel.sh@20 -- # IFS=: 00:07:38.332 04:10:50 -- accel/accel.sh@20 -- # read -r var val 00:07:38.332 04:10:50 -- accel/accel.sh@21 -- # val=1 00:07:38.332 04:10:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.332 04:10:50 -- accel/accel.sh@20 -- # IFS=: 00:07:38.332 04:10:50 -- accel/accel.sh@20 -- # read -r var val 00:07:38.332 04:10:50 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:38.332 04:10:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.332 04:10:50 -- accel/accel.sh@20 -- # IFS=: 00:07:38.332 04:10:50 -- accel/accel.sh@20 -- # read -r var val 00:07:38.332 04:10:50 -- accel/accel.sh@21 -- # val=Yes 00:07:38.332 04:10:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.332 04:10:50 -- accel/accel.sh@20 -- # IFS=: 00:07:38.332 04:10:50 -- accel/accel.sh@20 -- # read -r var val 00:07:38.332 04:10:50 -- accel/accel.sh@21 -- # val= 00:07:38.332 04:10:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.332 04:10:50 -- accel/accel.sh@20 -- # IFS=: 00:07:38.332 04:10:50 -- accel/accel.sh@20 -- # read -r var val 00:07:38.332 04:10:50 -- accel/accel.sh@21 -- # val= 00:07:38.332 04:10:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.332 04:10:50 -- accel/accel.sh@20 -- # IFS=: 00:07:38.332 04:10:50 -- accel/accel.sh@20 -- # read -r var val 00:07:39.709 04:10:51 -- accel/accel.sh@21 -- # val= 00:07:39.709 04:10:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.709 04:10:51 -- accel/accel.sh@20 -- # IFS=: 00:07:39.709 04:10:51 -- accel/accel.sh@20 -- # read -r var val 00:07:39.709 04:10:51 -- accel/accel.sh@21 -- # val= 00:07:39.709 04:10:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.709 04:10:51 -- accel/accel.sh@20 -- # IFS=: 00:07:39.709 04:10:51 -- accel/accel.sh@20 -- # read -r var val 00:07:39.709 04:10:51 -- accel/accel.sh@21 -- # val= 00:07:39.709 04:10:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.709 04:10:51 -- accel/accel.sh@20 -- # IFS=: 00:07:39.709 04:10:51 -- accel/accel.sh@20 -- # read -r var val 00:07:39.709 04:10:51 -- accel/accel.sh@21 -- # 
val= 00:07:39.709 04:10:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.709 04:10:51 -- accel/accel.sh@20 -- # IFS=: 00:07:39.709 04:10:51 -- accel/accel.sh@20 -- # read -r var val 00:07:39.709 04:10:51 -- accel/accel.sh@21 -- # val= 00:07:39.709 04:10:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.709 04:10:51 -- accel/accel.sh@20 -- # IFS=: 00:07:39.709 04:10:51 -- accel/accel.sh@20 -- # read -r var val 00:07:39.709 04:10:51 -- accel/accel.sh@21 -- # val= 00:07:39.709 04:10:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.709 04:10:51 -- accel/accel.sh@20 -- # IFS=: 00:07:39.709 04:10:51 -- accel/accel.sh@20 -- # read -r var val 00:07:39.709 04:10:51 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:39.709 04:10:51 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:39.709 04:10:51 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:39.709 00:07:39.709 real 0m2.869s 00:07:39.709 user 0m2.432s 00:07:39.709 sys 0m0.230s 00:07:39.709 04:10:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:39.709 04:10:51 -- common/autotest_common.sh@10 -- # set +x 00:07:39.709 ************************************ 00:07:39.709 END TEST accel_decmop_full 00:07:39.709 ************************************ 00:07:39.709 04:10:51 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:39.709 04:10:51 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:39.709 04:10:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:39.709 04:10:51 -- common/autotest_common.sh@10 -- # set +x 00:07:39.709 ************************************ 00:07:39.709 START TEST accel_decomp_mcore 00:07:39.709 ************************************ 00:07:39.709 04:10:51 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:39.709 04:10:51 -- accel/accel.sh@16 -- # local accel_opc 00:07:39.709 04:10:51 -- accel/accel.sh@17 -- # local accel_module 00:07:39.709 04:10:51 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:39.709 04:10:51 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:39.709 04:10:51 -- accel/accel.sh@12 -- # build_accel_config 00:07:39.709 04:10:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:39.709 04:10:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:39.709 04:10:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:39.709 04:10:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:39.709 04:10:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:39.709 04:10:51 -- accel/accel.sh@41 -- # local IFS=, 00:07:39.709 04:10:51 -- accel/accel.sh@42 -- # jq -r . 00:07:39.709 [2024-12-06 04:10:51.933549] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
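The accel_decmop_full run that finishes above repeats the same workload with -o 0 appended; its configuration block reports 111250-byte transfers instead of the 4096-byte default, and the Total row again works out to rate times size, 4800/s x 111250 B being roughly 509 MiB/s. A hedged reproduction, under the same assumptions as the earlier sketch:

    # full-size transfers instead of the 4096-byte default
    ./build/examples/accel_perf -t 1 -w decompress -l ./test/accel/bib -y -o 0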
00:07:39.709 [2024-12-06 04:10:51.933647] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69115 ] 00:07:39.709 [2024-12-06 04:10:52.073019] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:39.709 [2024-12-06 04:10:52.143468] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:39.709 [2024-12-06 04:10:52.143550] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:39.709 [2024-12-06 04:10:52.143636] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.709 [2024-12-06 04:10:52.143636] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:41.086 04:10:53 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:41.086 00:07:41.086 SPDK Configuration: 00:07:41.086 Core mask: 0xf 00:07:41.086 00:07:41.086 Accel Perf Configuration: 00:07:41.086 Workload Type: decompress 00:07:41.086 Transfer size: 4096 bytes 00:07:41.086 Vector count 1 00:07:41.086 Module: software 00:07:41.086 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:41.086 Queue depth: 32 00:07:41.086 Allocate depth: 32 00:07:41.086 # threads/core: 1 00:07:41.086 Run time: 1 seconds 00:07:41.086 Verify: Yes 00:07:41.086 00:07:41.086 Running for 1 seconds... 00:07:41.086 00:07:41.086 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:41.086 ------------------------------------------------------------------------------------ 00:07:41.086 0,0 61152/s 112 MiB/s 0 0 00:07:41.086 3,0 59136/s 108 MiB/s 0 0 00:07:41.086 2,0 58464/s 107 MiB/s 0 0 00:07:41.086 1,0 59296/s 109 MiB/s 0 0 00:07:41.086 ==================================================================================== 00:07:41.086 Total 238048/s 929 MiB/s 0 0' 00:07:41.086 04:10:53 -- accel/accel.sh@20 -- # IFS=: 00:07:41.086 04:10:53 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:41.086 04:10:53 -- accel/accel.sh@20 -- # read -r var val 00:07:41.086 04:10:53 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:41.086 04:10:53 -- accel/accel.sh@12 -- # build_accel_config 00:07:41.086 04:10:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:41.086 04:10:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:41.086 04:10:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:41.086 04:10:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:41.086 04:10:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:41.086 04:10:53 -- accel/accel.sh@41 -- # local IFS=, 00:07:41.086 04:10:53 -- accel/accel.sh@42 -- # jq -r . 00:07:41.086 [2024-12-06 04:10:53.365858] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
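The accel_decomp_mcore results above come from the same decompress job launched with -m 0xf: the configuration echoes "Core mask: 0xf", the EAL line switches to -c 0xf, and four reactors (cores 0 through 3) start instead of one. The four per-core rates (61152 + 59136 + 58464 + 59296) sum to the 238048/s Total, roughly 3.3 times the single-core run logged earlier. A sketch of that invocation, same assumptions as before:

    # spread the decompress workload across cores 0-3
    ./build/examples/accel_perf -t 1 -w decompress -l ./test/accel/bib -y -m 0xf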
00:07:41.086 [2024-12-06 04:10:53.366081] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69137 ] 00:07:41.086 [2024-12-06 04:10:53.497290] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:41.086 [2024-12-06 04:10:53.571502] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:41.087 [2024-12-06 04:10:53.571554] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:41.087 [2024-12-06 04:10:53.571679] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:41.087 [2024-12-06 04:10:53.571684] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.087 04:10:53 -- accel/accel.sh@21 -- # val= 00:07:41.087 04:10:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.087 04:10:53 -- accel/accel.sh@20 -- # IFS=: 00:07:41.087 04:10:53 -- accel/accel.sh@20 -- # read -r var val 00:07:41.087 04:10:53 -- accel/accel.sh@21 -- # val= 00:07:41.087 04:10:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.087 04:10:53 -- accel/accel.sh@20 -- # IFS=: 00:07:41.087 04:10:53 -- accel/accel.sh@20 -- # read -r var val 00:07:41.087 04:10:53 -- accel/accel.sh@21 -- # val= 00:07:41.087 04:10:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.087 04:10:53 -- accel/accel.sh@20 -- # IFS=: 00:07:41.087 04:10:53 -- accel/accel.sh@20 -- # read -r var val 00:07:41.087 04:10:53 -- accel/accel.sh@21 -- # val=0xf 00:07:41.087 04:10:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.087 04:10:53 -- accel/accel.sh@20 -- # IFS=: 00:07:41.087 04:10:53 -- accel/accel.sh@20 -- # read -r var val 00:07:41.087 04:10:53 -- accel/accel.sh@21 -- # val= 00:07:41.087 04:10:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.087 04:10:53 -- accel/accel.sh@20 -- # IFS=: 00:07:41.087 04:10:53 -- accel/accel.sh@20 -- # read -r var val 00:07:41.087 04:10:53 -- accel/accel.sh@21 -- # val= 00:07:41.087 04:10:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.087 04:10:53 -- accel/accel.sh@20 -- # IFS=: 00:07:41.087 04:10:53 -- accel/accel.sh@20 -- # read -r var val 00:07:41.087 04:10:53 -- accel/accel.sh@21 -- # val=decompress 00:07:41.087 04:10:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.087 04:10:53 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:41.087 04:10:53 -- accel/accel.sh@20 -- # IFS=: 00:07:41.087 04:10:53 -- accel/accel.sh@20 -- # read -r var val 00:07:41.087 04:10:53 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:41.087 04:10:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.087 04:10:53 -- accel/accel.sh@20 -- # IFS=: 00:07:41.087 04:10:53 -- accel/accel.sh@20 -- # read -r var val 00:07:41.087 04:10:53 -- accel/accel.sh@21 -- # val= 00:07:41.087 04:10:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.087 04:10:53 -- accel/accel.sh@20 -- # IFS=: 00:07:41.087 04:10:53 -- accel/accel.sh@20 -- # read -r var val 00:07:41.087 04:10:53 -- accel/accel.sh@21 -- # val=software 00:07:41.087 04:10:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.087 04:10:53 -- accel/accel.sh@23 -- # accel_module=software 00:07:41.087 04:10:53 -- accel/accel.sh@20 -- # IFS=: 00:07:41.087 04:10:53 -- accel/accel.sh@20 -- # read -r var val 00:07:41.087 04:10:53 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:41.087 04:10:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.087 04:10:53 -- accel/accel.sh@20 -- # IFS=: 
00:07:41.087 04:10:53 -- accel/accel.sh@20 -- # read -r var val 00:07:41.087 04:10:53 -- accel/accel.sh@21 -- # val=32 00:07:41.087 04:10:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.087 04:10:53 -- accel/accel.sh@20 -- # IFS=: 00:07:41.087 04:10:53 -- accel/accel.sh@20 -- # read -r var val 00:07:41.087 04:10:53 -- accel/accel.sh@21 -- # val=32 00:07:41.087 04:10:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.087 04:10:53 -- accel/accel.sh@20 -- # IFS=: 00:07:41.087 04:10:53 -- accel/accel.sh@20 -- # read -r var val 00:07:41.087 04:10:53 -- accel/accel.sh@21 -- # val=1 00:07:41.087 04:10:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.087 04:10:53 -- accel/accel.sh@20 -- # IFS=: 00:07:41.087 04:10:53 -- accel/accel.sh@20 -- # read -r var val 00:07:41.087 04:10:53 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:41.087 04:10:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.087 04:10:53 -- accel/accel.sh@20 -- # IFS=: 00:07:41.087 04:10:53 -- accel/accel.sh@20 -- # read -r var val 00:07:41.087 04:10:53 -- accel/accel.sh@21 -- # val=Yes 00:07:41.087 04:10:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.087 04:10:53 -- accel/accel.sh@20 -- # IFS=: 00:07:41.087 04:10:53 -- accel/accel.sh@20 -- # read -r var val 00:07:41.087 04:10:53 -- accel/accel.sh@21 -- # val= 00:07:41.087 04:10:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.087 04:10:53 -- accel/accel.sh@20 -- # IFS=: 00:07:41.087 04:10:53 -- accel/accel.sh@20 -- # read -r var val 00:07:41.087 04:10:53 -- accel/accel.sh@21 -- # val= 00:07:41.087 04:10:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.087 04:10:53 -- accel/accel.sh@20 -- # IFS=: 00:07:41.087 04:10:53 -- accel/accel.sh@20 -- # read -r var val 00:07:42.462 04:10:54 -- accel/accel.sh@21 -- # val= 00:07:42.462 04:10:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.462 04:10:54 -- accel/accel.sh@20 -- # IFS=: 00:07:42.462 04:10:54 -- accel/accel.sh@20 -- # read -r var val 00:07:42.462 04:10:54 -- accel/accel.sh@21 -- # val= 00:07:42.462 04:10:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.462 04:10:54 -- accel/accel.sh@20 -- # IFS=: 00:07:42.462 04:10:54 -- accel/accel.sh@20 -- # read -r var val 00:07:42.462 04:10:54 -- accel/accel.sh@21 -- # val= 00:07:42.462 04:10:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.462 04:10:54 -- accel/accel.sh@20 -- # IFS=: 00:07:42.462 04:10:54 -- accel/accel.sh@20 -- # read -r var val 00:07:42.462 04:10:54 -- accel/accel.sh@21 -- # val= 00:07:42.462 04:10:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.462 04:10:54 -- accel/accel.sh@20 -- # IFS=: 00:07:42.462 04:10:54 -- accel/accel.sh@20 -- # read -r var val 00:07:42.462 04:10:54 -- accel/accel.sh@21 -- # val= 00:07:42.462 04:10:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.462 04:10:54 -- accel/accel.sh@20 -- # IFS=: 00:07:42.462 04:10:54 -- accel/accel.sh@20 -- # read -r var val 00:07:42.462 04:10:54 -- accel/accel.sh@21 -- # val= 00:07:42.462 04:10:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.462 04:10:54 -- accel/accel.sh@20 -- # IFS=: 00:07:42.462 04:10:54 -- accel/accel.sh@20 -- # read -r var val 00:07:42.462 04:10:54 -- accel/accel.sh@21 -- # val= 00:07:42.462 04:10:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.462 04:10:54 -- accel/accel.sh@20 -- # IFS=: 00:07:42.462 04:10:54 -- accel/accel.sh@20 -- # read -r var val 00:07:42.462 04:10:54 -- accel/accel.sh@21 -- # val= 00:07:42.462 04:10:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.462 04:10:54 -- accel/accel.sh@20 -- # IFS=: 00:07:42.462 04:10:54 -- 
accel/accel.sh@20 -- # read -r var val 00:07:42.462 04:10:54 -- accel/accel.sh@21 -- # val= 00:07:42.462 04:10:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.462 04:10:54 -- accel/accel.sh@20 -- # IFS=: 00:07:42.462 04:10:54 -- accel/accel.sh@20 -- # read -r var val 00:07:42.462 04:10:54 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:42.462 04:10:54 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:42.462 04:10:54 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:42.462 00:07:42.462 real 0m2.885s 00:07:42.462 user 0m9.210s 00:07:42.462 sys 0m0.245s 00:07:42.462 04:10:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:42.462 04:10:54 -- common/autotest_common.sh@10 -- # set +x 00:07:42.462 ************************************ 00:07:42.462 END TEST accel_decomp_mcore 00:07:42.462 ************************************ 00:07:42.462 04:10:54 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:42.462 04:10:54 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:42.462 04:10:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:42.462 04:10:54 -- common/autotest_common.sh@10 -- # set +x 00:07:42.462 ************************************ 00:07:42.462 START TEST accel_decomp_full_mcore 00:07:42.462 ************************************ 00:07:42.462 04:10:54 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:42.462 04:10:54 -- accel/accel.sh@16 -- # local accel_opc 00:07:42.462 04:10:54 -- accel/accel.sh@17 -- # local accel_module 00:07:42.462 04:10:54 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:42.462 04:10:54 -- accel/accel.sh@12 -- # build_accel_config 00:07:42.462 04:10:54 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:42.462 04:10:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:42.462 04:10:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:42.462 04:10:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:42.462 04:10:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:42.462 04:10:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:42.462 04:10:54 -- accel/accel.sh@41 -- # local IFS=, 00:07:42.462 04:10:54 -- accel/accel.sh@42 -- # jq -r . 00:07:42.462 [2024-12-06 04:10:54.869471] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:42.462 [2024-12-06 04:10:54.869560] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69175 ] 00:07:42.462 [2024-12-06 04:10:55.008413] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:42.733 [2024-12-06 04:10:55.089145] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:42.733 [2024-12-06 04:10:55.089271] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:42.733 [2024-12-06 04:10:55.089428] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:42.733 [2024-12-06 04:10:55.089511] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.116 04:10:56 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:44.116 00:07:44.116 SPDK Configuration: 00:07:44.116 Core mask: 0xf 00:07:44.116 00:07:44.116 Accel Perf Configuration: 00:07:44.116 Workload Type: decompress 00:07:44.116 Transfer size: 111250 bytes 00:07:44.116 Vector count 1 00:07:44.116 Module: software 00:07:44.116 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:44.116 Queue depth: 32 00:07:44.116 Allocate depth: 32 00:07:44.116 # threads/core: 1 00:07:44.116 Run time: 1 seconds 00:07:44.116 Verify: Yes 00:07:44.116 00:07:44.116 Running for 1 seconds... 00:07:44.116 00:07:44.116 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:44.116 ------------------------------------------------------------------------------------ 00:07:44.116 0,0 4608/s 190 MiB/s 0 0 00:07:44.116 3,0 4608/s 190 MiB/s 0 0 00:07:44.116 2,0 4608/s 190 MiB/s 0 0 00:07:44.116 1,0 4608/s 190 MiB/s 0 0 00:07:44.116 ==================================================================================== 00:07:44.116 Total 18432/s 1955 MiB/s 0 0' 00:07:44.116 04:10:56 -- accel/accel.sh@20 -- # IFS=: 00:07:44.116 04:10:56 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:44.116 04:10:56 -- accel/accel.sh@20 -- # read -r var val 00:07:44.116 04:10:56 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:44.116 04:10:56 -- accel/accel.sh@12 -- # build_accel_config 00:07:44.116 04:10:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:44.116 04:10:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:44.116 04:10:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:44.116 04:10:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:44.116 04:10:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:44.116 04:10:56 -- accel/accel.sh@41 -- # local IFS=, 00:07:44.116 04:10:56 -- accel/accel.sh@42 -- # jq -r . 00:07:44.116 [2024-12-06 04:10:56.340298] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
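accel_decomp_full_mcore, whose results appear just above, combines the two previous variations (-o 0 and -m 0xf): each of the four cores sustains an identical 4608/s at 111250 bytes per transfer, and 18432/s x 111250 B, about 1955 MiB/s, matches the Total row. The corresponding sketch, with the usual assumptions:

    # full-size transfers on four cores
    ./build/examples/accel_perf -t 1 -w decompress -l ./test/accel/bib -y -o 0 -m 0xf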
00:07:44.116 [2024-12-06 04:10:56.340554] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69192 ] 00:07:44.116 [2024-12-06 04:10:56.478820] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:44.116 [2024-12-06 04:10:56.547122] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:44.116 [2024-12-06 04:10:56.547269] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:44.116 [2024-12-06 04:10:56.547361] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:44.116 [2024-12-06 04:10:56.547563] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.116 04:10:56 -- accel/accel.sh@21 -- # val= 00:07:44.116 04:10:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.116 04:10:56 -- accel/accel.sh@20 -- # IFS=: 00:07:44.116 04:10:56 -- accel/accel.sh@20 -- # read -r var val 00:07:44.116 04:10:56 -- accel/accel.sh@21 -- # val= 00:07:44.116 04:10:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.116 04:10:56 -- accel/accel.sh@20 -- # IFS=: 00:07:44.116 04:10:56 -- accel/accel.sh@20 -- # read -r var val 00:07:44.116 04:10:56 -- accel/accel.sh@21 -- # val= 00:07:44.116 04:10:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.116 04:10:56 -- accel/accel.sh@20 -- # IFS=: 00:07:44.116 04:10:56 -- accel/accel.sh@20 -- # read -r var val 00:07:44.116 04:10:56 -- accel/accel.sh@21 -- # val=0xf 00:07:44.116 04:10:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.116 04:10:56 -- accel/accel.sh@20 -- # IFS=: 00:07:44.116 04:10:56 -- accel/accel.sh@20 -- # read -r var val 00:07:44.116 04:10:56 -- accel/accel.sh@21 -- # val= 00:07:44.116 04:10:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.116 04:10:56 -- accel/accel.sh@20 -- # IFS=: 00:07:44.116 04:10:56 -- accel/accel.sh@20 -- # read -r var val 00:07:44.116 04:10:56 -- accel/accel.sh@21 -- # val= 00:07:44.116 04:10:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.116 04:10:56 -- accel/accel.sh@20 -- # IFS=: 00:07:44.116 04:10:56 -- accel/accel.sh@20 -- # read -r var val 00:07:44.116 04:10:56 -- accel/accel.sh@21 -- # val=decompress 00:07:44.116 04:10:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.116 04:10:56 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:44.116 04:10:56 -- accel/accel.sh@20 -- # IFS=: 00:07:44.116 04:10:56 -- accel/accel.sh@20 -- # read -r var val 00:07:44.116 04:10:56 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:44.116 04:10:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.116 04:10:56 -- accel/accel.sh@20 -- # IFS=: 00:07:44.116 04:10:56 -- accel/accel.sh@20 -- # read -r var val 00:07:44.116 04:10:56 -- accel/accel.sh@21 -- # val= 00:07:44.116 04:10:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.116 04:10:56 -- accel/accel.sh@20 -- # IFS=: 00:07:44.116 04:10:56 -- accel/accel.sh@20 -- # read -r var val 00:07:44.116 04:10:56 -- accel/accel.sh@21 -- # val=software 00:07:44.116 04:10:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.116 04:10:56 -- accel/accel.sh@23 -- # accel_module=software 00:07:44.116 04:10:56 -- accel/accel.sh@20 -- # IFS=: 00:07:44.116 04:10:56 -- accel/accel.sh@20 -- # read -r var val 00:07:44.116 04:10:56 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:44.116 04:10:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.116 04:10:56 -- accel/accel.sh@20 -- # IFS=: 
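Most of the surrounding noise, lines of the form "04:10:56 -- accel/accel.sh@21 -- # val=32", is bash xtrace output from the test scripts: each prefix names the script and line number being executed, which is what makes these CI logs greppable per test. SPDK's harness builds this prefix in its own PS4 string; the exact string is not reproduced here, but the mechanism can be sketched as:

    # illustrative only: print source file and line before every traced command
    export PS4='+ ${BASH_SOURCE##*/}@${LINENO} -- '
    set -x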
00:07:44.116 04:10:56 -- accel/accel.sh@20 -- # read -r var val 00:07:44.116 04:10:56 -- accel/accel.sh@21 -- # val=32 00:07:44.116 04:10:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.116 04:10:56 -- accel/accel.sh@20 -- # IFS=: 00:07:44.116 04:10:56 -- accel/accel.sh@20 -- # read -r var val 00:07:44.116 04:10:56 -- accel/accel.sh@21 -- # val=32 00:07:44.116 04:10:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.116 04:10:56 -- accel/accel.sh@20 -- # IFS=: 00:07:44.116 04:10:56 -- accel/accel.sh@20 -- # read -r var val 00:07:44.116 04:10:56 -- accel/accel.sh@21 -- # val=1 00:07:44.116 04:10:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.116 04:10:56 -- accel/accel.sh@20 -- # IFS=: 00:07:44.116 04:10:56 -- accel/accel.sh@20 -- # read -r var val 00:07:44.116 04:10:56 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:44.116 04:10:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.116 04:10:56 -- accel/accel.sh@20 -- # IFS=: 00:07:44.116 04:10:56 -- accel/accel.sh@20 -- # read -r var val 00:07:44.116 04:10:56 -- accel/accel.sh@21 -- # val=Yes 00:07:44.116 04:10:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.116 04:10:56 -- accel/accel.sh@20 -- # IFS=: 00:07:44.116 04:10:56 -- accel/accel.sh@20 -- # read -r var val 00:07:44.116 04:10:56 -- accel/accel.sh@21 -- # val= 00:07:44.116 04:10:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.116 04:10:56 -- accel/accel.sh@20 -- # IFS=: 00:07:44.116 04:10:56 -- accel/accel.sh@20 -- # read -r var val 00:07:44.116 04:10:56 -- accel/accel.sh@21 -- # val= 00:07:44.116 04:10:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.116 04:10:56 -- accel/accel.sh@20 -- # IFS=: 00:07:44.116 04:10:56 -- accel/accel.sh@20 -- # read -r var val 00:07:45.494 04:10:57 -- accel/accel.sh@21 -- # val= 00:07:45.494 04:10:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.494 04:10:57 -- accel/accel.sh@20 -- # IFS=: 00:07:45.494 04:10:57 -- accel/accel.sh@20 -- # read -r var val 00:07:45.494 04:10:57 -- accel/accel.sh@21 -- # val= 00:07:45.494 04:10:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.494 04:10:57 -- accel/accel.sh@20 -- # IFS=: 00:07:45.494 04:10:57 -- accel/accel.sh@20 -- # read -r var val 00:07:45.494 04:10:57 -- accel/accel.sh@21 -- # val= 00:07:45.494 04:10:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.494 04:10:57 -- accel/accel.sh@20 -- # IFS=: 00:07:45.494 04:10:57 -- accel/accel.sh@20 -- # read -r var val 00:07:45.494 04:10:57 -- accel/accel.sh@21 -- # val= 00:07:45.494 04:10:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.494 04:10:57 -- accel/accel.sh@20 -- # IFS=: 00:07:45.494 04:10:57 -- accel/accel.sh@20 -- # read -r var val 00:07:45.494 04:10:57 -- accel/accel.sh@21 -- # val= 00:07:45.494 04:10:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.494 04:10:57 -- accel/accel.sh@20 -- # IFS=: 00:07:45.494 04:10:57 -- accel/accel.sh@20 -- # read -r var val 00:07:45.494 04:10:57 -- accel/accel.sh@21 -- # val= 00:07:45.494 04:10:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.494 04:10:57 -- accel/accel.sh@20 -- # IFS=: 00:07:45.494 04:10:57 -- accel/accel.sh@20 -- # read -r var val 00:07:45.494 04:10:57 -- accel/accel.sh@21 -- # val= 00:07:45.494 04:10:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.494 04:10:57 -- accel/accel.sh@20 -- # IFS=: 00:07:45.494 04:10:57 -- accel/accel.sh@20 -- # read -r var val 00:07:45.494 04:10:57 -- accel/accel.sh@21 -- # val= 00:07:45.494 04:10:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.494 04:10:57 -- accel/accel.sh@20 -- # IFS=: 00:07:45.494 04:10:57 -- 
accel/accel.sh@20 -- # read -r var val 00:07:45.494 04:10:57 -- accel/accel.sh@21 -- # val= 00:07:45.494 04:10:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.494 04:10:57 -- accel/accel.sh@20 -- # IFS=: 00:07:45.494 04:10:57 -- accel/accel.sh@20 -- # read -r var val 00:07:45.494 04:10:57 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:45.494 04:10:57 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:45.494 04:10:57 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:45.494 00:07:45.494 real 0m2.926s 00:07:45.494 user 0m9.331s 00:07:45.495 sys 0m0.251s 00:07:45.495 04:10:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:45.495 04:10:57 -- common/autotest_common.sh@10 -- # set +x 00:07:45.495 ************************************ 00:07:45.495 END TEST accel_decomp_full_mcore 00:07:45.495 ************************************ 00:07:45.495 04:10:57 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:45.495 04:10:57 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:45.495 04:10:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:45.495 04:10:57 -- common/autotest_common.sh@10 -- # set +x 00:07:45.495 ************************************ 00:07:45.495 START TEST accel_decomp_mthread 00:07:45.495 ************************************ 00:07:45.495 04:10:57 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:45.495 04:10:57 -- accel/accel.sh@16 -- # local accel_opc 00:07:45.495 04:10:57 -- accel/accel.sh@17 -- # local accel_module 00:07:45.495 04:10:57 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:45.495 04:10:57 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:45.495 04:10:57 -- accel/accel.sh@12 -- # build_accel_config 00:07:45.495 04:10:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:45.495 04:10:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:45.495 04:10:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:45.495 04:10:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:45.495 04:10:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:45.495 04:10:57 -- accel/accel.sh@41 -- # local IFS=, 00:07:45.495 04:10:57 -- accel/accel.sh@42 -- # jq -r . 00:07:45.495 [2024-12-06 04:10:57.845333] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:45.495 [2024-12-06 04:10:57.845470] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69235 ] 00:07:45.495 [2024-12-06 04:10:57.983616] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.495 [2024-12-06 04:10:58.051043] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.874 04:10:59 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:46.874 00:07:46.874 SPDK Configuration: 00:07:46.874 Core mask: 0x1 00:07:46.874 00:07:46.874 Accel Perf Configuration: 00:07:46.874 Workload Type: decompress 00:07:46.874 Transfer size: 4096 bytes 00:07:46.874 Vector count 1 00:07:46.874 Module: software 00:07:46.874 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:46.874 Queue depth: 32 00:07:46.874 Allocate depth: 32 00:07:46.874 # threads/core: 2 00:07:46.874 Run time: 1 seconds 00:07:46.874 Verify: Yes 00:07:46.874 00:07:46.874 Running for 1 seconds... 00:07:46.874 00:07:46.874 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:46.874 ------------------------------------------------------------------------------------ 00:07:46.874 0,1 36096/s 66 MiB/s 0 0 00:07:46.874 0,0 35968/s 66 MiB/s 0 0 00:07:46.874 ==================================================================================== 00:07:46.874 Total 72064/s 281 MiB/s 0 0' 00:07:46.874 04:10:59 -- accel/accel.sh@20 -- # IFS=: 00:07:46.874 04:10:59 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:46.874 04:10:59 -- accel/accel.sh@20 -- # read -r var val 00:07:46.874 04:10:59 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:46.874 04:10:59 -- accel/accel.sh@12 -- # build_accel_config 00:07:46.874 04:10:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:46.875 04:10:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:46.875 04:10:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:46.875 04:10:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:46.875 04:10:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:46.875 04:10:59 -- accel/accel.sh@41 -- # local IFS=, 00:07:46.875 04:10:59 -- accel/accel.sh@42 -- # jq -r . 00:07:46.875 [2024-12-06 04:10:59.281534] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
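accel_decomp_mthread, reported above, goes back to a single core but adds -T 2: the configuration echoes "# threads/core: 2" and the results table gains a second row for core 0 (threads 0 and 1), the two threads together reaching 72064/s, essentially the same aggregate as the one-thread run. Sketch, same assumptions:

    # two worker threads on core 0
    ./build/examples/accel_perf -t 1 -w decompress -l ./test/accel/bib -y -T 2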
00:07:46.875 [2024-12-06 04:10:59.281657] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69249 ] 00:07:46.875 [2024-12-06 04:10:59.418809] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.134 [2024-12-06 04:10:59.496750] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.134 04:10:59 -- accel/accel.sh@21 -- # val= 00:07:47.134 04:10:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.134 04:10:59 -- accel/accel.sh@20 -- # IFS=: 00:07:47.134 04:10:59 -- accel/accel.sh@20 -- # read -r var val 00:07:47.134 04:10:59 -- accel/accel.sh@21 -- # val= 00:07:47.134 04:10:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.134 04:10:59 -- accel/accel.sh@20 -- # IFS=: 00:07:47.134 04:10:59 -- accel/accel.sh@20 -- # read -r var val 00:07:47.134 04:10:59 -- accel/accel.sh@21 -- # val= 00:07:47.134 04:10:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.134 04:10:59 -- accel/accel.sh@20 -- # IFS=: 00:07:47.135 04:10:59 -- accel/accel.sh@20 -- # read -r var val 00:07:47.135 04:10:59 -- accel/accel.sh@21 -- # val=0x1 00:07:47.135 04:10:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.135 04:10:59 -- accel/accel.sh@20 -- # IFS=: 00:07:47.135 04:10:59 -- accel/accel.sh@20 -- # read -r var val 00:07:47.135 04:10:59 -- accel/accel.sh@21 -- # val= 00:07:47.135 04:10:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.135 04:10:59 -- accel/accel.sh@20 -- # IFS=: 00:07:47.135 04:10:59 -- accel/accel.sh@20 -- # read -r var val 00:07:47.135 04:10:59 -- accel/accel.sh@21 -- # val= 00:07:47.135 04:10:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.135 04:10:59 -- accel/accel.sh@20 -- # IFS=: 00:07:47.135 04:10:59 -- accel/accel.sh@20 -- # read -r var val 00:07:47.135 04:10:59 -- accel/accel.sh@21 -- # val=decompress 00:07:47.135 04:10:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.135 04:10:59 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:47.135 04:10:59 -- accel/accel.sh@20 -- # IFS=: 00:07:47.135 04:10:59 -- accel/accel.sh@20 -- # read -r var val 00:07:47.135 04:10:59 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:47.135 04:10:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.135 04:10:59 -- accel/accel.sh@20 -- # IFS=: 00:07:47.135 04:10:59 -- accel/accel.sh@20 -- # read -r var val 00:07:47.135 04:10:59 -- accel/accel.sh@21 -- # val= 00:07:47.135 04:10:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.135 04:10:59 -- accel/accel.sh@20 -- # IFS=: 00:07:47.135 04:10:59 -- accel/accel.sh@20 -- # read -r var val 00:07:47.135 04:10:59 -- accel/accel.sh@21 -- # val=software 00:07:47.135 04:10:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.135 04:10:59 -- accel/accel.sh@23 -- # accel_module=software 00:07:47.135 04:10:59 -- accel/accel.sh@20 -- # IFS=: 00:07:47.135 04:10:59 -- accel/accel.sh@20 -- # read -r var val 00:07:47.135 04:10:59 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:47.135 04:10:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.135 04:10:59 -- accel/accel.sh@20 -- # IFS=: 00:07:47.135 04:10:59 -- accel/accel.sh@20 -- # read -r var val 00:07:47.135 04:10:59 -- accel/accel.sh@21 -- # val=32 00:07:47.135 04:10:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.135 04:10:59 -- accel/accel.sh@20 -- # IFS=: 00:07:47.135 04:10:59 -- accel/accel.sh@20 -- # read -r var val 00:07:47.135 04:10:59 -- 
accel/accel.sh@21 -- # val=32 00:07:47.135 04:10:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.135 04:10:59 -- accel/accel.sh@20 -- # IFS=: 00:07:47.135 04:10:59 -- accel/accel.sh@20 -- # read -r var val 00:07:47.135 04:10:59 -- accel/accel.sh@21 -- # val=2 00:07:47.135 04:10:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.135 04:10:59 -- accel/accel.sh@20 -- # IFS=: 00:07:47.135 04:10:59 -- accel/accel.sh@20 -- # read -r var val 00:07:47.135 04:10:59 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:47.135 04:10:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.135 04:10:59 -- accel/accel.sh@20 -- # IFS=: 00:07:47.135 04:10:59 -- accel/accel.sh@20 -- # read -r var val 00:07:47.135 04:10:59 -- accel/accel.sh@21 -- # val=Yes 00:07:47.135 04:10:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.135 04:10:59 -- accel/accel.sh@20 -- # IFS=: 00:07:47.135 04:10:59 -- accel/accel.sh@20 -- # read -r var val 00:07:47.135 04:10:59 -- accel/accel.sh@21 -- # val= 00:07:47.135 04:10:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.135 04:10:59 -- accel/accel.sh@20 -- # IFS=: 00:07:47.135 04:10:59 -- accel/accel.sh@20 -- # read -r var val 00:07:47.135 04:10:59 -- accel/accel.sh@21 -- # val= 00:07:47.135 04:10:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.135 04:10:59 -- accel/accel.sh@20 -- # IFS=: 00:07:47.135 04:10:59 -- accel/accel.sh@20 -- # read -r var val 00:07:48.513 04:11:00 -- accel/accel.sh@21 -- # val= 00:07:48.513 04:11:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.513 04:11:00 -- accel/accel.sh@20 -- # IFS=: 00:07:48.513 04:11:00 -- accel/accel.sh@20 -- # read -r var val 00:07:48.513 04:11:00 -- accel/accel.sh@21 -- # val= 00:07:48.513 04:11:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.513 04:11:00 -- accel/accel.sh@20 -- # IFS=: 00:07:48.513 04:11:00 -- accel/accel.sh@20 -- # read -r var val 00:07:48.513 04:11:00 -- accel/accel.sh@21 -- # val= 00:07:48.513 04:11:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.513 04:11:00 -- accel/accel.sh@20 -- # IFS=: 00:07:48.513 04:11:00 -- accel/accel.sh@20 -- # read -r var val 00:07:48.513 04:11:00 -- accel/accel.sh@21 -- # val= 00:07:48.513 04:11:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.513 04:11:00 -- accel/accel.sh@20 -- # IFS=: 00:07:48.513 04:11:00 -- accel/accel.sh@20 -- # read -r var val 00:07:48.513 04:11:00 -- accel/accel.sh@21 -- # val= 00:07:48.513 04:11:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.513 04:11:00 -- accel/accel.sh@20 -- # IFS=: 00:07:48.513 04:11:00 -- accel/accel.sh@20 -- # read -r var val 00:07:48.513 04:11:00 -- accel/accel.sh@21 -- # val= 00:07:48.513 04:11:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.513 04:11:00 -- accel/accel.sh@20 -- # IFS=: 00:07:48.513 04:11:00 -- accel/accel.sh@20 -- # read -r var val 00:07:48.513 04:11:00 -- accel/accel.sh@21 -- # val= 00:07:48.513 04:11:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.513 04:11:00 -- accel/accel.sh@20 -- # IFS=: 00:07:48.513 04:11:00 -- accel/accel.sh@20 -- # read -r var val 00:07:48.513 04:11:00 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:48.513 04:11:00 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:48.513 04:11:00 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:48.513 00:07:48.513 real 0m2.888s 00:07:48.513 user 0m2.461s 00:07:48.513 sys 0m0.223s 00:07:48.513 04:11:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:48.513 04:11:00 -- common/autotest_common.sh@10 -- # set +x 00:07:48.513 ************************************ 00:07:48.513 END 
TEST accel_decomp_mthread 00:07:48.513 ************************************ 00:07:48.513 04:11:00 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:48.513 04:11:00 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:48.513 04:11:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:48.513 04:11:00 -- common/autotest_common.sh@10 -- # set +x 00:07:48.513 ************************************ 00:07:48.513 START TEST accel_deomp_full_mthread 00:07:48.513 ************************************ 00:07:48.513 04:11:00 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:48.513 04:11:00 -- accel/accel.sh@16 -- # local accel_opc 00:07:48.513 04:11:00 -- accel/accel.sh@17 -- # local accel_module 00:07:48.513 04:11:00 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:48.513 04:11:00 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:48.513 04:11:00 -- accel/accel.sh@12 -- # build_accel_config 00:07:48.513 04:11:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:48.513 04:11:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:48.513 04:11:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:48.513 04:11:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:48.513 04:11:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:48.513 04:11:00 -- accel/accel.sh@41 -- # local IFS=, 00:07:48.513 04:11:00 -- accel/accel.sh@42 -- # jq -r . 00:07:48.513 [2024-12-06 04:11:00.784615] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:48.513 [2024-12-06 04:11:00.784889] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69289 ] 00:07:48.513 [2024-12-06 04:11:00.923475] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.513 [2024-12-06 04:11:00.998727] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.889 04:11:02 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:49.889 00:07:49.889 SPDK Configuration: 00:07:49.889 Core mask: 0x1 00:07:49.889 00:07:49.889 Accel Perf Configuration: 00:07:49.889 Workload Type: decompress 00:07:49.889 Transfer size: 111250 bytes 00:07:49.889 Vector count 1 00:07:49.889 Module: software 00:07:49.889 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:49.889 Queue depth: 32 00:07:49.889 Allocate depth: 32 00:07:49.889 # threads/core: 2 00:07:49.889 Run time: 1 seconds 00:07:49.889 Verify: Yes 00:07:49.889 00:07:49.889 Running for 1 seconds... 
00:07:49.889 00:07:49.889 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:49.889 ------------------------------------------------------------------------------------ 00:07:49.889 0,1 2400/s 99 MiB/s 0 0 00:07:49.889 0,0 2368/s 97 MiB/s 0 0 00:07:49.889 ==================================================================================== 00:07:49.889 Total 4768/s 505 MiB/s 0 0' 00:07:49.890 04:11:02 -- accel/accel.sh@20 -- # IFS=: 00:07:49.890 04:11:02 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:49.890 04:11:02 -- accel/accel.sh@20 -- # read -r var val 00:07:49.890 04:11:02 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:49.890 04:11:02 -- accel/accel.sh@12 -- # build_accel_config 00:07:49.890 04:11:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:49.890 04:11:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:49.890 04:11:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:49.890 04:11:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:49.890 04:11:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:49.890 04:11:02 -- accel/accel.sh@41 -- # local IFS=, 00:07:49.890 04:11:02 -- accel/accel.sh@42 -- # jq -r . 00:07:49.890 [2024-12-06 04:11:02.255599] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:49.890 [2024-12-06 04:11:02.255889] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69303 ] 00:07:49.890 [2024-12-06 04:11:02.393274] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.231 [2024-12-06 04:11:02.454939] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.231 04:11:02 -- accel/accel.sh@21 -- # val= 00:07:50.231 04:11:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.231 04:11:02 -- accel/accel.sh@20 -- # IFS=: 00:07:50.231 04:11:02 -- accel/accel.sh@20 -- # read -r var val 00:07:50.231 04:11:02 -- accel/accel.sh@21 -- # val= 00:07:50.231 04:11:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.232 04:11:02 -- accel/accel.sh@20 -- # IFS=: 00:07:50.232 04:11:02 -- accel/accel.sh@20 -- # read -r var val 00:07:50.232 04:11:02 -- accel/accel.sh@21 -- # val= 00:07:50.232 04:11:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.232 04:11:02 -- accel/accel.sh@20 -- # IFS=: 00:07:50.232 04:11:02 -- accel/accel.sh@20 -- # read -r var val 00:07:50.232 04:11:02 -- accel/accel.sh@21 -- # val=0x1 00:07:50.232 04:11:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.232 04:11:02 -- accel/accel.sh@20 -- # IFS=: 00:07:50.232 04:11:02 -- accel/accel.sh@20 -- # read -r var val 00:07:50.232 04:11:02 -- accel/accel.sh@21 -- # val= 00:07:50.232 04:11:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.232 04:11:02 -- accel/accel.sh@20 -- # IFS=: 00:07:50.232 04:11:02 -- accel/accel.sh@20 -- # read -r var val 00:07:50.232 04:11:02 -- accel/accel.sh@21 -- # val= 00:07:50.232 04:11:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.232 04:11:02 -- accel/accel.sh@20 -- # IFS=: 00:07:50.232 04:11:02 -- accel/accel.sh@20 -- # read -r var val 00:07:50.232 04:11:02 -- accel/accel.sh@21 -- # val=decompress 00:07:50.232 04:11:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.232 04:11:02 -- accel/accel.sh@24 -- # 
accel_opc=decompress 00:07:50.232 04:11:02 -- accel/accel.sh@20 -- # IFS=: 00:07:50.232 04:11:02 -- accel/accel.sh@20 -- # read -r var val 00:07:50.232 04:11:02 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:50.232 04:11:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.232 04:11:02 -- accel/accel.sh@20 -- # IFS=: 00:07:50.232 04:11:02 -- accel/accel.sh@20 -- # read -r var val 00:07:50.232 04:11:02 -- accel/accel.sh@21 -- # val= 00:07:50.232 04:11:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.232 04:11:02 -- accel/accel.sh@20 -- # IFS=: 00:07:50.232 04:11:02 -- accel/accel.sh@20 -- # read -r var val 00:07:50.232 04:11:02 -- accel/accel.sh@21 -- # val=software 00:07:50.232 04:11:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.232 04:11:02 -- accel/accel.sh@23 -- # accel_module=software 00:07:50.232 04:11:02 -- accel/accel.sh@20 -- # IFS=: 00:07:50.232 04:11:02 -- accel/accel.sh@20 -- # read -r var val 00:07:50.232 04:11:02 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:50.232 04:11:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.232 04:11:02 -- accel/accel.sh@20 -- # IFS=: 00:07:50.232 04:11:02 -- accel/accel.sh@20 -- # read -r var val 00:07:50.232 04:11:02 -- accel/accel.sh@21 -- # val=32 00:07:50.232 04:11:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.232 04:11:02 -- accel/accel.sh@20 -- # IFS=: 00:07:50.232 04:11:02 -- accel/accel.sh@20 -- # read -r var val 00:07:50.232 04:11:02 -- accel/accel.sh@21 -- # val=32 00:07:50.232 04:11:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.232 04:11:02 -- accel/accel.sh@20 -- # IFS=: 00:07:50.232 04:11:02 -- accel/accel.sh@20 -- # read -r var val 00:07:50.232 04:11:02 -- accel/accel.sh@21 -- # val=2 00:07:50.232 04:11:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.232 04:11:02 -- accel/accel.sh@20 -- # IFS=: 00:07:50.232 04:11:02 -- accel/accel.sh@20 -- # read -r var val 00:07:50.232 04:11:02 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:50.232 04:11:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.232 04:11:02 -- accel/accel.sh@20 -- # IFS=: 00:07:50.232 04:11:02 -- accel/accel.sh@20 -- # read -r var val 00:07:50.232 04:11:02 -- accel/accel.sh@21 -- # val=Yes 00:07:50.232 04:11:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.232 04:11:02 -- accel/accel.sh@20 -- # IFS=: 00:07:50.232 04:11:02 -- accel/accel.sh@20 -- # read -r var val 00:07:50.232 04:11:02 -- accel/accel.sh@21 -- # val= 00:07:50.232 04:11:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.232 04:11:02 -- accel/accel.sh@20 -- # IFS=: 00:07:50.232 04:11:02 -- accel/accel.sh@20 -- # read -r var val 00:07:50.232 04:11:02 -- accel/accel.sh@21 -- # val= 00:07:50.232 04:11:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.232 04:11:02 -- accel/accel.sh@20 -- # IFS=: 00:07:50.232 04:11:02 -- accel/accel.sh@20 -- # read -r var val 00:07:51.185 04:11:03 -- accel/accel.sh@21 -- # val= 00:07:51.185 04:11:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.185 04:11:03 -- accel/accel.sh@20 -- # IFS=: 00:07:51.185 04:11:03 -- accel/accel.sh@20 -- # read -r var val 00:07:51.185 04:11:03 -- accel/accel.sh@21 -- # val= 00:07:51.185 04:11:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.185 04:11:03 -- accel/accel.sh@20 -- # IFS=: 00:07:51.185 04:11:03 -- accel/accel.sh@20 -- # read -r var val 00:07:51.185 04:11:03 -- accel/accel.sh@21 -- # val= 00:07:51.185 04:11:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.185 04:11:03 -- accel/accel.sh@20 -- # IFS=: 00:07:51.185 04:11:03 -- accel/accel.sh@20 -- # 
read -r var val 00:07:51.185 04:11:03 -- accel/accel.sh@21 -- # val= 00:07:51.185 04:11:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.185 04:11:03 -- accel/accel.sh@20 -- # IFS=: 00:07:51.185 04:11:03 -- accel/accel.sh@20 -- # read -r var val 00:07:51.185 04:11:03 -- accel/accel.sh@21 -- # val= 00:07:51.185 04:11:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.185 04:11:03 -- accel/accel.sh@20 -- # IFS=: 00:07:51.185 04:11:03 -- accel/accel.sh@20 -- # read -r var val 00:07:51.185 04:11:03 -- accel/accel.sh@21 -- # val= 00:07:51.185 04:11:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.185 04:11:03 -- accel/accel.sh@20 -- # IFS=: 00:07:51.185 04:11:03 -- accel/accel.sh@20 -- # read -r var val 00:07:51.185 04:11:03 -- accel/accel.sh@21 -- # val= 00:07:51.185 04:11:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.185 04:11:03 -- accel/accel.sh@20 -- # IFS=: 00:07:51.185 04:11:03 -- accel/accel.sh@20 -- # read -r var val 00:07:51.185 04:11:03 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:51.185 04:11:03 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:51.185 04:11:03 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:51.185 00:07:51.185 real 0m2.923s 00:07:51.185 user 0m2.492s 00:07:51.185 sys 0m0.228s 00:07:51.185 04:11:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:51.185 04:11:03 -- common/autotest_common.sh@10 -- # set +x 00:07:51.185 ************************************ 00:07:51.185 END TEST accel_deomp_full_mthread 00:07:51.185 ************************************ 00:07:51.185 04:11:03 -- accel/accel.sh@116 -- # [[ n == y ]] 00:07:51.185 04:11:03 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:51.185 04:11:03 -- accel/accel.sh@129 -- # build_accel_config 00:07:51.185 04:11:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:51.185 04:11:03 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:51.185 04:11:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:51.185 04:11:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:51.185 04:11:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:51.185 04:11:03 -- common/autotest_common.sh@10 -- # set +x 00:07:51.185 04:11:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:51.185 04:11:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:51.185 04:11:03 -- accel/accel.sh@41 -- # local IFS=, 00:07:51.185 04:11:03 -- accel/accel.sh@42 -- # jq -r . 00:07:51.185 ************************************ 00:07:51.185 START TEST accel_dif_functional_tests 00:07:51.185 ************************************ 00:07:51.185 04:11:03 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:51.445 [2024-12-06 04:11:03.787102] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
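For reference, the accel_perf command the harness runs for this case (it appears verbatim in the trace above) breaks down as follows; /dev/fd/62 is the JSON accel config that build_accel_config feeds the tool, so outside the test script a regular config file path would take its place:

/home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
  -c /dev/fd/62 \                                    # JSON accel config (generated by build_accel_config in the script)
  -t 1 \                                             # run time in seconds ("Run time: 1 seconds" above)
  -w decompress \                                    # workload type
  -l /home/vagrant/spdk_repo/spdk/test/accel/bib \   # input file for the compress/decompress workloads
  -y \                                               # verify the results ("Verify: Yes")
  -o 0 \                                             # transfer-size argument as passed by the test; this run reports 111250-byte transfers
  -T 2                                               # threads per core, which is why the table above shows rows 0,0 and 0,1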
00:07:51.445 [2024-12-06 04:11:03.787201] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69343 ] 00:07:51.445 [2024-12-06 04:11:03.924994] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:51.445 [2024-12-06 04:11:03.990015] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:51.445 [2024-12-06 04:11:03.990138] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:51.445 [2024-12-06 04:11:03.990144] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.704 00:07:51.704 00:07:51.704 CUnit - A unit testing framework for C - Version 2.1-3 00:07:51.704 http://cunit.sourceforge.net/ 00:07:51.704 00:07:51.704 00:07:51.704 Suite: accel_dif 00:07:51.704 Test: verify: DIF generated, GUARD check ...passed 00:07:51.704 Test: verify: DIF generated, APPTAG check ...passed 00:07:51.704 Test: verify: DIF generated, REFTAG check ...passed 00:07:51.704 Test: verify: DIF not generated, GUARD check ...[2024-12-06 04:11:04.079721] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:51.704 [2024-12-06 04:11:04.079808] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:51.704 passed 00:07:51.704 Test: verify: DIF not generated, APPTAG check ...passed 00:07:51.704 Test: verify: DIF not generated, REFTAG check ...[2024-12-06 04:11:04.079847] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:51.704 [2024-12-06 04:11:04.079874] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:51.704 [2024-12-06 04:11:04.079900] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:51.704 passed 00:07:51.704 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:51.704 Test: verify: APPTAG incorrect, APPTAG check ...[2024-12-06 04:11:04.079971] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:51.704 [2024-12-06 04:11:04.080142] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:51.704 passed 00:07:51.704 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:51.704 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:51.705 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:51.705 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed[2024-12-06 04:11:04.080330] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:51.705 00:07:51.705 Test: generate copy: DIF generated, GUARD check ...passed 00:07:51.705 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:51.705 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:51.705 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:51.705 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:51.705 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:51.705 Test: generate copy: iovecs-len validate ...[2024-12-06 04:11:04.080903] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:51.705 passed 00:07:51.705 Test: generate copy: buffer alignment validate ...passed 00:07:51.705 00:07:51.705 Run Summary: Type Total Ran Passed Failed Inactive 00:07:51.705 suites 1 1 n/a 0 0 00:07:51.705 tests 20 20 20 0 0 00:07:51.705 asserts 204 204 204 0 n/a 00:07:51.705 00:07:51.705 Elapsed time = 0.005 seconds 00:07:51.705 00:07:51.705 real 0m0.526s 00:07:51.705 user 0m0.717s 00:07:51.705 sys 0m0.151s 00:07:51.705 04:11:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:51.964 ************************************ 00:07:51.964 END TEST accel_dif_functional_tests 00:07:51.964 ************************************ 00:07:51.964 04:11:04 -- common/autotest_common.sh@10 -- # set +x 00:07:51.964 00:07:51.964 real 1m4.824s 00:07:51.964 user 1m8.347s 00:07:51.964 sys 0m6.833s 00:07:51.964 04:11:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:51.964 04:11:04 -- common/autotest_common.sh@10 -- # set +x 00:07:51.964 ************************************ 00:07:51.964 END TEST accel 00:07:51.964 ************************************ 00:07:51.964 04:11:04 -- spdk/autotest.sh@177 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:51.964 04:11:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:51.964 04:11:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:51.964 04:11:04 -- common/autotest_common.sh@10 -- # set +x 00:07:51.964 ************************************ 00:07:51.964 START TEST accel_rpc 00:07:51.964 ************************************ 00:07:51.964 04:11:04 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:51.964 * Looking for test storage... 00:07:51.964 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:07:51.964 04:11:04 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:51.964 04:11:04 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:51.964 04:11:04 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:51.964 04:11:04 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:51.964 04:11:04 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:51.964 04:11:04 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:51.964 04:11:04 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:51.964 04:11:04 -- scripts/common.sh@335 -- # IFS=.-: 00:07:51.964 04:11:04 -- scripts/common.sh@335 -- # read -ra ver1 00:07:51.964 04:11:04 -- scripts/common.sh@336 -- # IFS=.-: 00:07:51.964 04:11:04 -- scripts/common.sh@336 -- # read -ra ver2 00:07:51.964 04:11:04 -- scripts/common.sh@337 -- # local 'op=<' 00:07:51.964 04:11:04 -- scripts/common.sh@339 -- # ver1_l=2 00:07:51.965 04:11:04 -- scripts/common.sh@340 -- # ver2_l=1 00:07:51.965 04:11:04 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:51.965 04:11:04 -- scripts/common.sh@343 -- # case "$op" in 00:07:51.965 04:11:04 -- scripts/common.sh@344 -- # : 1 00:07:51.965 04:11:04 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:51.965 04:11:04 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:51.965 04:11:04 -- scripts/common.sh@364 -- # decimal 1 00:07:51.965 04:11:04 -- scripts/common.sh@352 -- # local d=1 00:07:51.965 04:11:04 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:51.965 04:11:04 -- scripts/common.sh@354 -- # echo 1 00:07:51.965 04:11:04 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:51.965 04:11:04 -- scripts/common.sh@365 -- # decimal 2 00:07:51.965 04:11:04 -- scripts/common.sh@352 -- # local d=2 00:07:51.965 04:11:04 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:51.965 04:11:04 -- scripts/common.sh@354 -- # echo 2 00:07:51.965 04:11:04 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:51.965 04:11:04 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:51.965 04:11:04 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:51.965 04:11:04 -- scripts/common.sh@367 -- # return 0 00:07:51.965 04:11:04 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:52.224 04:11:04 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:52.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.224 --rc genhtml_branch_coverage=1 00:07:52.225 --rc genhtml_function_coverage=1 00:07:52.225 --rc genhtml_legend=1 00:07:52.225 --rc geninfo_all_blocks=1 00:07:52.225 --rc geninfo_unexecuted_blocks=1 00:07:52.225 00:07:52.225 ' 00:07:52.225 04:11:04 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:52.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.225 --rc genhtml_branch_coverage=1 00:07:52.225 --rc genhtml_function_coverage=1 00:07:52.225 --rc genhtml_legend=1 00:07:52.225 --rc geninfo_all_blocks=1 00:07:52.225 --rc geninfo_unexecuted_blocks=1 00:07:52.225 00:07:52.225 ' 00:07:52.225 04:11:04 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:52.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.225 --rc genhtml_branch_coverage=1 00:07:52.225 --rc genhtml_function_coverage=1 00:07:52.225 --rc genhtml_legend=1 00:07:52.225 --rc geninfo_all_blocks=1 00:07:52.225 --rc geninfo_unexecuted_blocks=1 00:07:52.225 00:07:52.225 ' 00:07:52.225 04:11:04 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:52.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.225 --rc genhtml_branch_coverage=1 00:07:52.225 --rc genhtml_function_coverage=1 00:07:52.225 --rc genhtml_legend=1 00:07:52.225 --rc geninfo_all_blocks=1 00:07:52.225 --rc geninfo_unexecuted_blocks=1 00:07:52.225 00:07:52.225 ' 00:07:52.225 04:11:04 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:52.225 04:11:04 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=69410 00:07:52.225 04:11:04 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:52.225 04:11:04 -- accel/accel_rpc.sh@15 -- # waitforlisten 69410 00:07:52.225 04:11:04 -- common/autotest_common.sh@829 -- # '[' -z 69410 ']' 00:07:52.225 04:11:04 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:52.225 04:11:04 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:52.225 04:11:04 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:52.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:52.225 04:11:04 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:52.225 04:11:04 -- common/autotest_common.sh@10 -- # set +x 00:07:52.225 [2024-12-06 04:11:04.586889] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:52.225 [2024-12-06 04:11:04.587199] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69410 ] 00:07:52.225 [2024-12-06 04:11:04.726561] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.484 [2024-12-06 04:11:04.808375] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:52.484 [2024-12-06 04:11:04.808549] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.049 04:11:05 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:53.049 04:11:05 -- common/autotest_common.sh@862 -- # return 0 00:07:53.049 04:11:05 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:53.049 04:11:05 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:53.049 04:11:05 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:53.049 04:11:05 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:53.049 04:11:05 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:53.049 04:11:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:53.049 04:11:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:53.049 04:11:05 -- common/autotest_common.sh@10 -- # set +x 00:07:53.049 ************************************ 00:07:53.049 START TEST accel_assign_opcode 00:07:53.049 ************************************ 00:07:53.049 04:11:05 -- common/autotest_common.sh@1114 -- # accel_assign_opcode_test_suite 00:07:53.049 04:11:05 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:53.307 04:11:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.307 04:11:05 -- common/autotest_common.sh@10 -- # set +x 00:07:53.307 [2024-12-06 04:11:05.617087] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:53.307 04:11:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.307 04:11:05 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:53.307 04:11:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.307 04:11:05 -- common/autotest_common.sh@10 -- # set +x 00:07:53.307 [2024-12-06 04:11:05.625079] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:53.307 04:11:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.307 04:11:05 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:53.307 04:11:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.307 04:11:05 -- common/autotest_common.sh@10 -- # set +x 00:07:53.307 04:11:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.307 04:11:05 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:53.307 04:11:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.307 04:11:05 -- common/autotest_common.sh@10 -- # set +x 00:07:53.307 04:11:05 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:53.307 04:11:05 -- accel/accel_rpc.sh@42 -- # grep software 00:07:53.565 04:11:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.565 software 00:07:53.565 
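The accel_assign_opcode flow that just completed reduces to a few JSON-RPCs against a target started with --wait-for-rpc; done by hand with the same rpc.py (default socket /var/tmp/spdk.sock), it would look roughly like:

# Start the target but defer subsystem init so opcode assignments can still be changed.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc &

# Once the RPC socket is up: route the 'copy' opcode to the software module, then finish init.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_assign_opc -o copy -m software
/home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init

# Confirm the assignment took effect -- the test greps this output for "software".
/home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_get_opc_assignments | jq -r .copy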
************************************ 00:07:53.565 END TEST accel_assign_opcode 00:07:53.565 ************************************ 00:07:53.565 00:07:53.565 real 0m0.301s 00:07:53.565 user 0m0.055s 00:07:53.565 sys 0m0.012s 00:07:53.565 04:11:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:53.565 04:11:05 -- common/autotest_common.sh@10 -- # set +x 00:07:53.565 04:11:05 -- accel/accel_rpc.sh@55 -- # killprocess 69410 00:07:53.565 04:11:05 -- common/autotest_common.sh@936 -- # '[' -z 69410 ']' 00:07:53.565 04:11:05 -- common/autotest_common.sh@940 -- # kill -0 69410 00:07:53.565 04:11:05 -- common/autotest_common.sh@941 -- # uname 00:07:53.565 04:11:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:53.565 04:11:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69410 00:07:53.565 killing process with pid 69410 00:07:53.565 04:11:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:53.565 04:11:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:53.565 04:11:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69410' 00:07:53.565 04:11:05 -- common/autotest_common.sh@955 -- # kill 69410 00:07:53.565 04:11:05 -- common/autotest_common.sh@960 -- # wait 69410 00:07:53.823 ************************************ 00:07:53.823 END TEST accel_rpc 00:07:53.823 ************************************ 00:07:53.823 00:07:53.823 real 0m2.006s 00:07:53.823 user 0m2.120s 00:07:53.823 sys 0m0.465s 00:07:53.823 04:11:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:53.823 04:11:06 -- common/autotest_common.sh@10 -- # set +x 00:07:54.081 04:11:06 -- spdk/autotest.sh@178 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:54.081 04:11:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:54.081 04:11:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:54.081 04:11:06 -- common/autotest_common.sh@10 -- # set +x 00:07:54.081 ************************************ 00:07:54.081 START TEST app_cmdline 00:07:54.081 ************************************ 00:07:54.081 04:11:06 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:54.081 * Looking for test storage... 
00:07:54.081 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:54.081 04:11:06 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:54.081 04:11:06 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:54.081 04:11:06 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:54.081 04:11:06 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:54.081 04:11:06 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:54.081 04:11:06 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:54.081 04:11:06 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:54.081 04:11:06 -- scripts/common.sh@335 -- # IFS=.-: 00:07:54.081 04:11:06 -- scripts/common.sh@335 -- # read -ra ver1 00:07:54.081 04:11:06 -- scripts/common.sh@336 -- # IFS=.-: 00:07:54.081 04:11:06 -- scripts/common.sh@336 -- # read -ra ver2 00:07:54.081 04:11:06 -- scripts/common.sh@337 -- # local 'op=<' 00:07:54.081 04:11:06 -- scripts/common.sh@339 -- # ver1_l=2 00:07:54.081 04:11:06 -- scripts/common.sh@340 -- # ver2_l=1 00:07:54.081 04:11:06 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:54.081 04:11:06 -- scripts/common.sh@343 -- # case "$op" in 00:07:54.081 04:11:06 -- scripts/common.sh@344 -- # : 1 00:07:54.081 04:11:06 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:54.081 04:11:06 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:54.081 04:11:06 -- scripts/common.sh@364 -- # decimal 1 00:07:54.082 04:11:06 -- scripts/common.sh@352 -- # local d=1 00:07:54.082 04:11:06 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:54.082 04:11:06 -- scripts/common.sh@354 -- # echo 1 00:07:54.082 04:11:06 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:54.082 04:11:06 -- scripts/common.sh@365 -- # decimal 2 00:07:54.082 04:11:06 -- scripts/common.sh@352 -- # local d=2 00:07:54.082 04:11:06 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:54.082 04:11:06 -- scripts/common.sh@354 -- # echo 2 00:07:54.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:54.082 04:11:06 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:54.082 04:11:06 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:54.082 04:11:06 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:54.082 04:11:06 -- scripts/common.sh@367 -- # return 0 00:07:54.082 04:11:06 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:54.082 04:11:06 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:54.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.082 --rc genhtml_branch_coverage=1 00:07:54.082 --rc genhtml_function_coverage=1 00:07:54.082 --rc genhtml_legend=1 00:07:54.082 --rc geninfo_all_blocks=1 00:07:54.082 --rc geninfo_unexecuted_blocks=1 00:07:54.082 00:07:54.082 ' 00:07:54.082 04:11:06 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:54.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.082 --rc genhtml_branch_coverage=1 00:07:54.082 --rc genhtml_function_coverage=1 00:07:54.082 --rc genhtml_legend=1 00:07:54.082 --rc geninfo_all_blocks=1 00:07:54.082 --rc geninfo_unexecuted_blocks=1 00:07:54.082 00:07:54.082 ' 00:07:54.082 04:11:06 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:54.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.082 --rc genhtml_branch_coverage=1 00:07:54.082 --rc genhtml_function_coverage=1 00:07:54.082 --rc genhtml_legend=1 00:07:54.082 --rc geninfo_all_blocks=1 00:07:54.082 --rc geninfo_unexecuted_blocks=1 00:07:54.082 00:07:54.082 ' 00:07:54.082 04:11:06 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:54.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.082 --rc genhtml_branch_coverage=1 00:07:54.082 --rc genhtml_function_coverage=1 00:07:54.082 --rc genhtml_legend=1 00:07:54.082 --rc geninfo_all_blocks=1 00:07:54.082 --rc geninfo_unexecuted_blocks=1 00:07:54.082 00:07:54.082 ' 00:07:54.082 04:11:06 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:54.082 04:11:06 -- app/cmdline.sh@17 -- # spdk_tgt_pid=69510 00:07:54.082 04:11:06 -- app/cmdline.sh@18 -- # waitforlisten 69510 00:07:54.082 04:11:06 -- common/autotest_common.sh@829 -- # '[' -z 69510 ']' 00:07:54.082 04:11:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:54.082 04:11:06 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:54.082 04:11:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:54.082 04:11:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:54.082 04:11:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:54.082 04:11:06 -- common/autotest_common.sh@10 -- # set +x 00:07:54.341 [2024-12-06 04:11:06.669137] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:54.341 [2024-12-06 04:11:06.669857] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69510 ] 00:07:54.341 [2024-12-06 04:11:06.811789] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.600 [2024-12-06 04:11:06.910136] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:54.600 [2024-12-06 04:11:06.910506] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.167 04:11:07 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:55.167 04:11:07 -- common/autotest_common.sh@862 -- # return 0 00:07:55.167 04:11:07 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:55.425 { 00:07:55.425 "version": "SPDK v24.01.1-pre git sha1 c13c99a5e", 00:07:55.425 "fields": { 00:07:55.425 "major": 24, 00:07:55.425 "minor": 1, 00:07:55.425 "patch": 1, 00:07:55.425 "suffix": "-pre", 00:07:55.425 "commit": "c13c99a5e" 00:07:55.425 } 00:07:55.425 } 00:07:55.425 04:11:07 -- app/cmdline.sh@22 -- # expected_methods=() 00:07:55.425 04:11:07 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:55.425 04:11:07 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:55.425 04:11:07 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:55.425 04:11:07 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:55.425 04:11:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.425 04:11:07 -- common/autotest_common.sh@10 -- # set +x 00:07:55.425 04:11:07 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:55.425 04:11:07 -- app/cmdline.sh@26 -- # sort 00:07:55.425 04:11:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.682 04:11:08 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:55.682 04:11:08 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:55.683 04:11:08 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:55.683 04:11:08 -- common/autotest_common.sh@650 -- # local es=0 00:07:55.683 04:11:08 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:55.683 04:11:08 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:55.683 04:11:08 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:55.683 04:11:08 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:55.683 04:11:08 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:55.683 04:11:08 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:55.683 04:11:08 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:55.683 04:11:08 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:55.683 04:11:08 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:55.683 04:11:08 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:55.941 request: 00:07:55.941 { 00:07:55.941 "method": "env_dpdk_get_mem_stats", 00:07:55.941 "req_id": 1 00:07:55.941 } 00:07:55.941 Got 
JSON-RPC error response 00:07:55.941 response: 00:07:55.941 { 00:07:55.941 "code": -32601, 00:07:55.941 "message": "Method not found" 00:07:55.941 } 00:07:55.941 04:11:08 -- common/autotest_common.sh@653 -- # es=1 00:07:55.941 04:11:08 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:55.941 04:11:08 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:55.941 04:11:08 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:55.941 04:11:08 -- app/cmdline.sh@1 -- # killprocess 69510 00:07:55.941 04:11:08 -- common/autotest_common.sh@936 -- # '[' -z 69510 ']' 00:07:55.941 04:11:08 -- common/autotest_common.sh@940 -- # kill -0 69510 00:07:55.941 04:11:08 -- common/autotest_common.sh@941 -- # uname 00:07:55.941 04:11:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:55.941 04:11:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69510 00:07:55.941 killing process with pid 69510 00:07:55.941 04:11:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:55.941 04:11:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:55.941 04:11:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69510' 00:07:55.941 04:11:08 -- common/autotest_common.sh@955 -- # kill 69510 00:07:55.941 04:11:08 -- common/autotest_common.sh@960 -- # wait 69510 00:07:56.201 ************************************ 00:07:56.201 END TEST app_cmdline 00:07:56.201 ************************************ 00:07:56.201 00:07:56.201 real 0m2.312s 00:07:56.201 user 0m2.894s 00:07:56.201 sys 0m0.514s 00:07:56.201 04:11:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:56.201 04:11:08 -- common/autotest_common.sh@10 -- # set +x 00:07:56.461 04:11:08 -- spdk/autotest.sh@179 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:56.461 04:11:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:56.461 04:11:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:56.461 04:11:08 -- common/autotest_common.sh@10 -- # set +x 00:07:56.461 ************************************ 00:07:56.461 START TEST version 00:07:56.461 ************************************ 00:07:56.461 04:11:08 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:56.461 * Looking for test storage... 
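The app_cmdline test that just finished exercises the RPC allowlist; reproduced by hand against the same binaries, the sequence is:

# Only these two methods are reachable; anything else must fail with -32601.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &

/home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version         # allowed: returns the version JSON shown above
/home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods          # allowed: lists exactly the permitted methods
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats   # rejected: "Method not found", code -32601, as above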
00:07:56.461 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:56.461 04:11:08 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:56.461 04:11:08 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:56.461 04:11:08 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:56.461 04:11:08 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:56.461 04:11:08 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:56.461 04:11:08 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:56.461 04:11:08 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:56.461 04:11:08 -- scripts/common.sh@335 -- # IFS=.-: 00:07:56.461 04:11:08 -- scripts/common.sh@335 -- # read -ra ver1 00:07:56.461 04:11:08 -- scripts/common.sh@336 -- # IFS=.-: 00:07:56.461 04:11:08 -- scripts/common.sh@336 -- # read -ra ver2 00:07:56.461 04:11:08 -- scripts/common.sh@337 -- # local 'op=<' 00:07:56.461 04:11:08 -- scripts/common.sh@339 -- # ver1_l=2 00:07:56.461 04:11:08 -- scripts/common.sh@340 -- # ver2_l=1 00:07:56.461 04:11:08 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:56.461 04:11:08 -- scripts/common.sh@343 -- # case "$op" in 00:07:56.461 04:11:08 -- scripts/common.sh@344 -- # : 1 00:07:56.461 04:11:08 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:56.461 04:11:08 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:56.461 04:11:08 -- scripts/common.sh@364 -- # decimal 1 00:07:56.461 04:11:08 -- scripts/common.sh@352 -- # local d=1 00:07:56.461 04:11:08 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:56.461 04:11:08 -- scripts/common.sh@354 -- # echo 1 00:07:56.461 04:11:08 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:56.461 04:11:08 -- scripts/common.sh@365 -- # decimal 2 00:07:56.461 04:11:08 -- scripts/common.sh@352 -- # local d=2 00:07:56.461 04:11:08 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:56.461 04:11:08 -- scripts/common.sh@354 -- # echo 2 00:07:56.461 04:11:08 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:56.461 04:11:08 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:56.461 04:11:08 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:56.461 04:11:08 -- scripts/common.sh@367 -- # return 0 00:07:56.461 04:11:08 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:56.461 04:11:08 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:56.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.461 --rc genhtml_branch_coverage=1 00:07:56.461 --rc genhtml_function_coverage=1 00:07:56.461 --rc genhtml_legend=1 00:07:56.461 --rc geninfo_all_blocks=1 00:07:56.461 --rc geninfo_unexecuted_blocks=1 00:07:56.461 00:07:56.461 ' 00:07:56.461 04:11:08 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:56.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.461 --rc genhtml_branch_coverage=1 00:07:56.461 --rc genhtml_function_coverage=1 00:07:56.461 --rc genhtml_legend=1 00:07:56.461 --rc geninfo_all_blocks=1 00:07:56.461 --rc geninfo_unexecuted_blocks=1 00:07:56.461 00:07:56.461 ' 00:07:56.461 04:11:08 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:56.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.461 --rc genhtml_branch_coverage=1 00:07:56.461 --rc genhtml_function_coverage=1 00:07:56.461 --rc genhtml_legend=1 00:07:56.461 --rc geninfo_all_blocks=1 00:07:56.461 --rc geninfo_unexecuted_blocks=1 00:07:56.461 00:07:56.461 ' 00:07:56.461 04:11:08 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:56.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.461 --rc genhtml_branch_coverage=1 00:07:56.461 --rc genhtml_function_coverage=1 00:07:56.461 --rc genhtml_legend=1 00:07:56.461 --rc geninfo_all_blocks=1 00:07:56.461 --rc geninfo_unexecuted_blocks=1 00:07:56.461 00:07:56.461 ' 00:07:56.461 04:11:08 -- app/version.sh@17 -- # get_header_version major 00:07:56.461 04:11:08 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:56.461 04:11:08 -- app/version.sh@14 -- # cut -f2 00:07:56.461 04:11:08 -- app/version.sh@14 -- # tr -d '"' 00:07:56.461 04:11:08 -- app/version.sh@17 -- # major=24 00:07:56.461 04:11:08 -- app/version.sh@18 -- # get_header_version minor 00:07:56.461 04:11:08 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:56.461 04:11:08 -- app/version.sh@14 -- # cut -f2 00:07:56.461 04:11:08 -- app/version.sh@14 -- # tr -d '"' 00:07:56.461 04:11:08 -- app/version.sh@18 -- # minor=1 00:07:56.461 04:11:08 -- app/version.sh@19 -- # get_header_version patch 00:07:56.461 04:11:08 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:56.461 04:11:08 -- app/version.sh@14 -- # tr -d '"' 00:07:56.461 04:11:08 -- app/version.sh@14 -- # cut -f2 00:07:56.461 04:11:08 -- app/version.sh@19 -- # patch=1 00:07:56.461 04:11:08 -- app/version.sh@20 -- # get_header_version suffix 00:07:56.461 04:11:08 -- app/version.sh@14 -- # cut -f2 00:07:56.461 04:11:08 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:56.461 04:11:08 -- app/version.sh@14 -- # tr -d '"' 00:07:56.461 04:11:08 -- app/version.sh@20 -- # suffix=-pre 00:07:56.461 04:11:08 -- app/version.sh@22 -- # version=24.1 00:07:56.461 04:11:08 -- app/version.sh@25 -- # (( patch != 0 )) 00:07:56.461 04:11:08 -- app/version.sh@25 -- # version=24.1.1 00:07:56.461 04:11:08 -- app/version.sh@28 -- # version=24.1.1rc0 00:07:56.461 04:11:08 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:56.461 04:11:08 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:56.461 04:11:09 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:07:56.461 04:11:09 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:07:56.461 ************************************ 00:07:56.461 END TEST version 00:07:56.461 ************************************ 00:07:56.461 00:07:56.461 real 0m0.242s 00:07:56.461 user 0m0.159s 00:07:56.461 sys 0m0.118s 00:07:56.461 04:11:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:56.461 04:11:09 -- common/autotest_common.sh@10 -- # set +x 00:07:56.721 04:11:09 -- spdk/autotest.sh@181 -- # '[' 0 -eq 1 ']' 00:07:56.721 04:11:09 -- spdk/autotest.sh@191 -- # uname -s 00:07:56.721 04:11:09 -- spdk/autotest.sh@191 -- # [[ Linux == Linux ]] 00:07:56.721 04:11:09 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:07:56.721 04:11:09 -- spdk/autotest.sh@192 -- # [[ 1 -eq 1 ]] 00:07:56.721 04:11:09 -- spdk/autotest.sh@198 -- # [[ 0 -eq 0 ]] 00:07:56.721 04:11:09 -- spdk/autotest.sh@199 -- # run_test spdk_dd 
/home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:07:56.721 04:11:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:56.721 04:11:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:56.721 04:11:09 -- common/autotest_common.sh@10 -- # set +x 00:07:56.721 ************************************ 00:07:56.721 START TEST spdk_dd 00:07:56.721 ************************************ 00:07:56.721 04:11:09 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:07:56.721 * Looking for test storage... 00:07:56.721 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:56.721 04:11:09 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:56.721 04:11:09 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:56.721 04:11:09 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:56.721 04:11:09 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:56.721 04:11:09 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:56.721 04:11:09 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:56.721 04:11:09 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:56.721 04:11:09 -- scripts/common.sh@335 -- # IFS=.-: 00:07:56.721 04:11:09 -- scripts/common.sh@335 -- # read -ra ver1 00:07:56.721 04:11:09 -- scripts/common.sh@336 -- # IFS=.-: 00:07:56.721 04:11:09 -- scripts/common.sh@336 -- # read -ra ver2 00:07:56.721 04:11:09 -- scripts/common.sh@337 -- # local 'op=<' 00:07:56.721 04:11:09 -- scripts/common.sh@339 -- # ver1_l=2 00:07:56.721 04:11:09 -- scripts/common.sh@340 -- # ver2_l=1 00:07:56.721 04:11:09 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:56.721 04:11:09 -- scripts/common.sh@343 -- # case "$op" in 00:07:56.721 04:11:09 -- scripts/common.sh@344 -- # : 1 00:07:56.721 04:11:09 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:56.721 04:11:09 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:56.721 04:11:09 -- scripts/common.sh@364 -- # decimal 1 00:07:56.721 04:11:09 -- scripts/common.sh@352 -- # local d=1 00:07:56.721 04:11:09 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:56.721 04:11:09 -- scripts/common.sh@354 -- # echo 1 00:07:56.721 04:11:09 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:56.721 04:11:09 -- scripts/common.sh@365 -- # decimal 2 00:07:56.721 04:11:09 -- scripts/common.sh@352 -- # local d=2 00:07:56.721 04:11:09 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:56.721 04:11:09 -- scripts/common.sh@354 -- # echo 2 00:07:56.721 04:11:09 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:56.721 04:11:09 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:56.721 04:11:09 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:56.721 04:11:09 -- scripts/common.sh@367 -- # return 0 00:07:56.721 04:11:09 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:56.721 04:11:09 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:56.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.721 --rc genhtml_branch_coverage=1 00:07:56.721 --rc genhtml_function_coverage=1 00:07:56.721 --rc genhtml_legend=1 00:07:56.721 --rc geninfo_all_blocks=1 00:07:56.721 --rc geninfo_unexecuted_blocks=1 00:07:56.721 00:07:56.721 ' 00:07:56.721 04:11:09 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:56.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.721 --rc genhtml_branch_coverage=1 00:07:56.721 --rc genhtml_function_coverage=1 00:07:56.721 --rc genhtml_legend=1 00:07:56.721 --rc geninfo_all_blocks=1 00:07:56.721 --rc geninfo_unexecuted_blocks=1 00:07:56.721 00:07:56.721 ' 00:07:56.721 04:11:09 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:56.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.721 --rc genhtml_branch_coverage=1 00:07:56.721 --rc genhtml_function_coverage=1 00:07:56.721 --rc genhtml_legend=1 00:07:56.721 --rc geninfo_all_blocks=1 00:07:56.721 --rc geninfo_unexecuted_blocks=1 00:07:56.721 00:07:56.721 ' 00:07:56.721 04:11:09 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:56.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.721 --rc genhtml_branch_coverage=1 00:07:56.721 --rc genhtml_function_coverage=1 00:07:56.721 --rc genhtml_legend=1 00:07:56.721 --rc geninfo_all_blocks=1 00:07:56.721 --rc geninfo_unexecuted_blocks=1 00:07:56.721 00:07:56.721 ' 00:07:56.721 04:11:09 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:56.721 04:11:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:56.721 04:11:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:56.721 04:11:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:56.721 04:11:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.721 04:11:09 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.721 04:11:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.721 04:11:09 -- paths/export.sh@5 -- # export PATH 00:07:56.721 04:11:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.722 04:11:09 -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:57.291 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:57.291 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:57.291 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:57.291 04:11:09 -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:07:57.291 04:11:09 -- dd/dd.sh@11 -- # nvme_in_userspace 00:07:57.291 04:11:09 -- scripts/common.sh@311 -- # local bdf bdfs 00:07:57.292 04:11:09 -- scripts/common.sh@312 -- # local nvmes 00:07:57.292 04:11:09 -- scripts/common.sh@314 -- # [[ -n '' ]] 00:07:57.292 04:11:09 -- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:07:57.292 04:11:09 -- scripts/common.sh@317 -- # iter_pci_class_code 01 08 02 00:07:57.292 04:11:09 -- scripts/common.sh@297 -- # local bdf= 00:07:57.292 04:11:09 -- scripts/common.sh@299 -- # iter_all_pci_class_code 01 08 02 00:07:57.292 04:11:09 -- scripts/common.sh@232 -- # local class 00:07:57.292 04:11:09 -- scripts/common.sh@233 -- # local subclass 00:07:57.292 04:11:09 -- scripts/common.sh@234 -- # local progif 00:07:57.292 04:11:09 -- scripts/common.sh@235 -- # printf %02x 1 00:07:57.292 04:11:09 -- scripts/common.sh@235 -- # class=01 00:07:57.292 04:11:09 -- scripts/common.sh@236 -- # printf %02x 8 00:07:57.292 04:11:09 -- scripts/common.sh@236 -- # subclass=08 00:07:57.292 04:11:09 -- scripts/common.sh@237 -- # printf %02x 2 00:07:57.292 04:11:09 -- scripts/common.sh@237 -- # progif=02 00:07:57.292 04:11:09 -- scripts/common.sh@239 -- # hash lspci 00:07:57.292 04:11:09 -- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']' 00:07:57.292 04:11:09 -- scripts/common.sh@241 -- # lspci -mm -n -D 00:07:57.292 04:11:09 -- scripts/common.sh@242 -- # grep -i -- -p02 00:07:57.292 04:11:09 -- scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:07:57.292 04:11:09 -- scripts/common.sh@244 -- # tr -d '"' 00:07:57.292 04:11:09 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:07:57.292 04:11:09 -- scripts/common.sh@300 -- # 
pci_can_use 0000:00:06.0 00:07:57.292 04:11:09 -- scripts/common.sh@15 -- # local i 00:07:57.292 04:11:09 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:07:57.292 04:11:09 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:07:57.292 04:11:09 -- scripts/common.sh@24 -- # return 0 00:07:57.292 04:11:09 -- scripts/common.sh@301 -- # echo 0000:00:06.0 00:07:57.292 04:11:09 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:07:57.292 04:11:09 -- scripts/common.sh@300 -- # pci_can_use 0000:00:07.0 00:07:57.292 04:11:09 -- scripts/common.sh@15 -- # local i 00:07:57.292 04:11:09 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:07:57.292 04:11:09 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:07:57.292 04:11:09 -- scripts/common.sh@24 -- # return 0 00:07:57.292 04:11:09 -- scripts/common.sh@301 -- # echo 0000:00:07.0 00:07:57.292 04:11:09 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:07:57.292 04:11:09 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:06.0 ]] 00:07:57.292 04:11:09 -- scripts/common.sh@322 -- # uname -s 00:07:57.292 04:11:09 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:07:57.292 04:11:09 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:07:57.292 04:11:09 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:07:57.292 04:11:09 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:07.0 ]] 00:07:57.292 04:11:09 -- scripts/common.sh@322 -- # uname -s 00:07:57.292 04:11:09 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:07:57.292 04:11:09 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:07:57.292 04:11:09 -- scripts/common.sh@327 -- # (( 2 )) 00:07:57.292 04:11:09 -- scripts/common.sh@328 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:07:57.292 04:11:09 -- dd/dd.sh@13 -- # check_liburing 00:07:57.292 04:11:09 -- dd/common.sh@139 -- # local lib so 00:07:57.292 04:11:09 -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:07:57.292 04:11:09 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:57.292 04:11:09 -- dd/common.sh@137 -- # LD_TRACE_LOADED_OBJECTS=1 00:07:57.292 04:11:09 -- dd/common.sh@137 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:57.292 04:11:09 -- dd/common.sh@143 -- # [[ linux-vdso.so.1 == liburing.so.* ]] 00:07:57.292 04:11:09 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:57.292 04:11:09 -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.5.0 == liburing.so.* ]] 00:07:57.292 04:11:09 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:57.292 04:11:09 -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.5.0 == liburing.so.* ]] 00:07:57.292 04:11:09 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:57.292 04:11:09 -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.6.0 == liburing.so.* ]] 00:07:57.292 04:11:09 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:57.292 04:11:09 -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.5.0 == liburing.so.* ]] 00:07:57.292 04:11:09 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:57.292 04:11:09 -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.5.0 == liburing.so.* ]] 00:07:57.292 04:11:09 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:57.292 04:11:09 -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.5.0 == liburing.so.* ]] 00:07:57.292 04:11:09 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:57.292 04:11:09 -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.5.0 == liburing.so.* ]] 00:07:57.292 04:11:09 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:57.292 04:11:09 -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.5.0 == 
liburing.so.* ]] 00:07:57.292 04:11:09 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:57.292 04:11:09 -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.5.0 == liburing.so.* ]] 00:07:57.292 04:11:09 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:57.292 04:11:09 -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.5.0 == liburing.so.* ]] 00:07:57.292 04:11:09 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:57.292 04:11:09 -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.5.0 == liburing.so.* ]] 00:07:57.292 04:11:09 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:57.292 04:11:09 -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.5.0 == liburing.so.* ]] 00:07:57.292 04:11:09 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:57.292 04:11:09 -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.9.0 == liburing.so.* ]] 00:07:57.292 04:11:09 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:57.292 04:11:09 -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.10.1 == liburing.so.* ]] 00:07:57.292 04:11:09 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:57.292 04:11:09 -- dd/common.sh@143 -- # [[ libspdk_lvol.so.9.1 == liburing.so.* ]] 00:07:57.292 04:11:09 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:57.292 04:11:09 -- dd/common.sh@143 -- # [[ libspdk_blob.so.10.1 == liburing.so.* ]] 00:07:57.292 04:11:09 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:57.292 04:11:09 -- dd/common.sh@143 -- # [[ libspdk_nvme.so.12.0 == liburing.so.* ]] 00:07:57.292 04:11:09 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:57.292 04:11:09 -- dd/common.sh@143 -- # [[ libspdk_rdma.so.5.0 == liburing.so.* ]] 00:07:57.292 04:11:09 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:57.292 04:11:09 -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.5.0 == liburing.so.* ]] 00:07:57.292 04:11:09 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:57.292 04:11:09 -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.5.0 == liburing.so.* ]] 00:07:57.292 04:11:09 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:57.292 04:11:09 -- dd/common.sh@143 -- # [[ libspdk_ftl.so.8.0 == liburing.so.* ]] 00:07:57.292 04:11:09 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:57.292 04:11:09 -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.5.0 == liburing.so.* ]] 00:07:57.292 04:11:09 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:57.292 04:11:09 -- dd/common.sh@143 -- # [[ libspdk_virtio.so.6.0 == liburing.so.* ]] 00:07:57.292 04:11:09 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:57.292 04:11:09 -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.4.0 == liburing.so.* ]] 00:07:57.292 04:11:09 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:57.292 04:11:09 -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.5.0 == liburing.so.* ]] 00:07:57.292 04:11:09 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:57.292 04:11:09 -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.5.0 == liburing.so.* ]] 00:07:57.292 04:11:09 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:57.292 04:11:09 -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.1.0 == liburing.so.* ]] 00:07:57.292 04:11:09 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:57.292 04:11:09 -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.5.0 == liburing.so.* ]] 00:07:57.292 04:11:09 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:57.292 04:11:09 -- dd/common.sh@143 -- # [[ libspdk_ioat.so.6.0 == liburing.so.* ]] 00:07:57.292 04:11:09 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:57.292 04:11:09 -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.4.0 == liburing.so.* ]] 00:07:57.292 
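The scan running here is check_liburing from dd/common.sh: it sets LD_TRACE_LOADED_OBJECTS=1 (which makes the dynamic loader print a binary's shared-object dependencies instead of executing it) and compares every entry against liburing.so.*. The same check, collapsed to a one-off shell test, is roughly:

# List spdk_dd's shared-library dependencies and report whether liburing is among them.
if LD_TRACE_LOADED_OBJECTS=1 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd | grep -q 'liburing\.so'; then
    echo "spdk_dd is linked against liburing"
else
    echo "spdk_dd is not linked against liburing"
fi

The device list handed to dd.sh comes from the nvme_in_userspace helper a few lines up, whose core is the lspci pipeline visible in the trace:

# Enumerate NVMe controllers (PCI class 01, subclass 08, prog-if 02) by BDF.
lspci -mm -n -D | grep -i -- -p02 | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'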
04:11:09 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:57.292 04:11:09 -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.2.0 == liburing.so.* ]] 00:07:57.292 04:11:09 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:57.292 04:11:09 -- dd/common.sh@143 -- # [[ libspdk_idxd.so.11.0 == liburing.so.* ]] 00:07:57.292 04:11:09 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:57.292 04:11:09 -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.3.0 == liburing.so.* ]] 00:07:57.292 04:11:09 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:57.292 04:11:09 -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.13.0 == liburing.so.* ]] 00:07:57.292 04:11:09 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:57.292 04:11:09 -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.3.0 == liburing.so.* ]] 00:07:57.292 04:11:09 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:57.292 04:11:09 -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.3.0 == liburing.so.* ]] 00:07:57.292 04:11:09 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:57.292 04:11:09 -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.5.0 == liburing.so.* ]] 00:07:57.292 04:11:09 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:57.292 04:11:09 -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.4.0 == liburing.so.* ]] 00:07:57.292 04:11:09 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:57.292 04:11:09 -- dd/common.sh@143 -- # [[ libspdk_event.so.12.0 == liburing.so.* ]] 00:07:57.292 04:11:09 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:57.292 04:11:09 -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.5.0 == liburing.so.* ]] 00:07:57.292 04:11:09 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:57.292 04:11:09 -- dd/common.sh@143 -- # [[ libspdk_bdev.so.14.0 == liburing.so.* ]] 00:07:57.292 04:11:09 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:57.292 04:11:09 -- dd/common.sh@143 -- # [[ libspdk_notify.so.5.0 == liburing.so.* ]] 00:07:57.292 04:11:09 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:57.292 04:11:09 -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.5.0 == liburing.so.* ]] 00:07:57.292 04:11:09 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:57.292 04:11:09 -- dd/common.sh@143 -- # [[ libspdk_accel.so.14.0 == liburing.so.* ]] 00:07:57.292 04:11:09 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:57.292 04:11:09 -- dd/common.sh@143 -- # [[ libspdk_dma.so.3.0 == liburing.so.* ]] 00:07:57.293 04:11:09 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:57.293 04:11:09 -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.5.0 == liburing.so.* ]] 00:07:57.293 04:11:09 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:57.293 04:11:09 -- dd/common.sh@143 -- # [[ libspdk_vmd.so.5.0 == liburing.so.* ]] 00:07:57.293 04:11:09 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:57.293 04:11:09 -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.4.0 == liburing.so.* ]] 00:07:57.293 04:11:09 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:57.293 04:11:09 -- dd/common.sh@143 -- # [[ libspdk_sock.so.8.0 == liburing.so.* ]] 00:07:57.293 04:11:09 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:57.293 04:11:09 -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.2.0 == liburing.so.* ]] 00:07:57.293 04:11:09 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:57.293 04:11:09 -- dd/common.sh@143 -- # [[ libspdk_init.so.4.0 == liburing.so.* ]] 00:07:57.293 04:11:09 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:57.293 04:11:09 -- dd/common.sh@143 -- # [[ libspdk_thread.so.9.0 == liburing.so.* ]] 00:07:57.293 04:11:09 -- 
dd/common.sh@142 -- # read -r lib _ so _ 00:07:57.293 04:11:09 -- dd/common.sh@143 -- # [[ libspdk_trace.so.9.0 == liburing.so.* ]] 00:07:57.293 04:11:09 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:57.293 04:11:09 -- dd/common.sh@143 -- # [[ libspdk_rpc.so.5.0 == liburing.so.* ]] 00:07:57.293 04:11:09 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:57.293 04:11:09 -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.5.1 == liburing.so.* ]] 00:07:57.293 04:11:09 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:57.293 04:11:09 -- dd/common.sh@143 -- # [[ libspdk_json.so.5.1 == liburing.so.* ]] 00:07:57.293 04:11:09 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:57.293 04:11:09 -- dd/common.sh@143 -- # [[ libspdk_util.so.8.0 == liburing.so.* ]] 00:07:57.293 04:11:09 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:57.293 04:11:09 -- dd/common.sh@143 -- # [[ libspdk_log.so.6.1 == liburing.so.* ]] 00:07:57.293 04:11:09 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:57.293 04:11:09 -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:07:57.293 04:11:09 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:57.293 04:11:09 -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:07:57.293 04:11:09 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:57.293 04:11:09 -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:07:57.293 04:11:09 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:57.293 04:11:09 -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:07:57.293 04:11:09 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:57.293 04:11:09 -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:07:57.293 04:11:09 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:57.293 04:11:09 -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:07:57.293 04:11:09 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:57.293 04:11:09 -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:07:57.293 04:11:09 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:57.293 04:11:09 -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:07:57.293 04:11:09 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:57.293 04:11:09 -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:07:57.293 04:11:09 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:57.293 04:11:09 -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:07:57.293 04:11:09 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:57.293 04:11:09 -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:07:57.293 04:11:09 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:57.293 04:11:09 -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:07:57.293 04:11:09 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:57.293 04:11:09 -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:07:57.293 04:11:09 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:57.293 04:11:09 -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:07:57.293 04:11:09 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:57.293 04:11:09 -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:07:57.293 04:11:09 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:57.293 04:11:09 -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:07:57.293 04:11:09 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:57.293 04:11:09 -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* 
]] 00:07:57.293 04:11:09 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:57.293 04:11:09 -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:07:57.293 04:11:09 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:57.293 04:11:09 -- dd/common.sh@143 -- # [[ libisal_crypto.so.2 == liburing.so.* ]] 00:07:57.293 04:11:09 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:57.293 04:11:09 -- dd/common.sh@143 -- # [[ libaccel-config.so.1 == liburing.so.* ]] 00:07:57.293 04:11:09 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:57.293 04:11:09 -- dd/common.sh@143 -- # [[ libaio.so.1 == liburing.so.* ]] 00:07:57.293 04:11:09 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:57.293 04:11:09 -- dd/common.sh@143 -- # [[ libiscsi.so.9 == liburing.so.* ]] 00:07:57.293 04:11:09 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:57.293 04:11:09 -- dd/common.sh@143 -- # [[ libubsan.so.1 == liburing.so.* ]] 00:07:57.293 04:11:09 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:57.293 04:11:09 -- dd/common.sh@143 -- # [[ libc.so.6 == liburing.so.* ]] 00:07:57.293 04:11:09 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:57.293 04:11:09 -- dd/common.sh@143 -- # [[ libibverbs.so.1 == liburing.so.* ]] 00:07:57.293 04:11:09 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:57.293 04:11:09 -- dd/common.sh@143 -- # [[ librdmacm.so.1 == liburing.so.* ]] 00:07:57.293 04:11:09 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:57.293 04:11:09 -- dd/common.sh@143 -- # [[ libfuse3.so.3 == liburing.so.* ]] 00:07:57.293 04:11:09 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:57.293 04:11:09 -- dd/common.sh@143 -- # [[ /lib64/ld-linux-x86-64.so.2 == liburing.so.* ]] 00:07:57.293 04:11:09 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:57.293 04:11:09 -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:07:57.293 04:11:09 -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:07:57.293 * spdk_dd linked to liburing 00:07:57.293 04:11:09 -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:07:57.293 04:11:09 -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:07:57.293 04:11:09 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:57.293 04:11:09 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:57.293 04:11:09 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:57.293 04:11:09 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:57.293 04:11:09 -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:07:57.293 04:11:09 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:57.293 04:11:09 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:57.293 04:11:09 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:57.293 04:11:09 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:57.293 04:11:09 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:57.293 04:11:09 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:57.293 04:11:09 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:57.293 04:11:09 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:57.293 04:11:09 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:57.293 04:11:09 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:57.293 04:11:09 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:57.293 04:11:09 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:57.293 04:11:09 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:57.293 04:11:09 
-- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:07:57.293 04:11:09 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:57.293 04:11:09 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:57.293 04:11:09 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:57.293 04:11:09 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:57.293 04:11:09 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:57.293 04:11:09 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:57.293 04:11:09 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:57.293 04:11:09 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:57.293 04:11:09 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:57.293 04:11:09 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:57.293 04:11:09 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:57.293 04:11:09 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:57.293 04:11:09 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:57.293 04:11:09 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:57.293 04:11:09 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:57.293 04:11:09 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:57.293 04:11:09 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:07:57.293 04:11:09 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:57.293 04:11:09 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:57.293 04:11:09 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:57.293 04:11:09 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:57.293 04:11:09 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:07:57.293 04:11:09 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:57.293 04:11:09 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:57.293 04:11:09 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:57.293 04:11:09 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:57.293 04:11:09 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:07:57.293 04:11:09 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:07:57.293 04:11:09 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:57.293 04:11:09 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:07:57.293 04:11:09 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:07:57.293 04:11:09 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:07:57.293 04:11:09 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:07:57.293 04:11:09 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=y 00:07:57.293 04:11:09 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:07:57.293 04:11:09 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:07:57.293 04:11:09 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:07:57.293 04:11:09 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:07:57.293 04:11:09 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:07:57.294 04:11:09 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:07:57.294 04:11:09 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:07:57.294 04:11:09 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:07:57.294 04:11:09 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:07:57.294 04:11:09 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:07:57.294 04:11:09 -- common/build_config.sh@64 -- # 
CONFIG_SHARED=y 00:07:57.294 04:11:09 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:07:57.294 04:11:09 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:57.294 04:11:09 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:07:57.294 04:11:09 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:07:57.294 04:11:09 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:07:57.294 04:11:09 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:07:57.294 04:11:09 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:07:57.294 04:11:09 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:07:57.294 04:11:09 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:07:57.294 04:11:09 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:07:57.294 04:11:09 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:07:57.294 04:11:09 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:07:57.294 04:11:09 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:57.294 04:11:09 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:07:57.294 04:11:09 -- common/build_config.sh@79 -- # CONFIG_URING=y 00:07:57.294 04:11:09 -- dd/common.sh@149 -- # [[ y != y ]] 00:07:57.294 04:11:09 -- dd/common.sh@152 -- # [[ ! -e /usr/lib64/liburing.so.2 ]] 00:07:57.294 04:11:09 -- dd/common.sh@156 -- # export liburing_in_use=1 00:07:57.294 04:11:09 -- dd/common.sh@156 -- # liburing_in_use=1 00:07:57.294 04:11:09 -- dd/common.sh@157 -- # return 0 00:07:57.294 04:11:09 -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:07:57.294 04:11:09 -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:06.0 0000:00:07.0 00:07:57.294 04:11:09 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:57.294 04:11:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:57.294 04:11:09 -- common/autotest_common.sh@10 -- # set +x 00:07:57.294 ************************************ 00:07:57.294 START TEST spdk_dd_basic_rw 00:07:57.294 ************************************ 00:07:57.294 04:11:09 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:06.0 0000:00:07.0 00:07:57.554 * Looking for test storage... 
00:07:57.554 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:57.554 04:11:09 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:57.554 04:11:09 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:57.554 04:11:09 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:57.554 04:11:09 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:57.554 04:11:09 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:57.554 04:11:09 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:57.554 04:11:09 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:57.554 04:11:09 -- scripts/common.sh@335 -- # IFS=.-: 00:07:57.554 04:11:09 -- scripts/common.sh@335 -- # read -ra ver1 00:07:57.554 04:11:09 -- scripts/common.sh@336 -- # IFS=.-: 00:07:57.554 04:11:09 -- scripts/common.sh@336 -- # read -ra ver2 00:07:57.554 04:11:09 -- scripts/common.sh@337 -- # local 'op=<' 00:07:57.554 04:11:09 -- scripts/common.sh@339 -- # ver1_l=2 00:07:57.554 04:11:09 -- scripts/common.sh@340 -- # ver2_l=1 00:07:57.554 04:11:09 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:57.554 04:11:09 -- scripts/common.sh@343 -- # case "$op" in 00:07:57.554 04:11:09 -- scripts/common.sh@344 -- # : 1 00:07:57.554 04:11:09 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:57.554 04:11:09 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:57.554 04:11:09 -- scripts/common.sh@364 -- # decimal 1 00:07:57.554 04:11:09 -- scripts/common.sh@352 -- # local d=1 00:07:57.554 04:11:09 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:57.554 04:11:09 -- scripts/common.sh@354 -- # echo 1 00:07:57.555 04:11:09 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:57.555 04:11:09 -- scripts/common.sh@365 -- # decimal 2 00:07:57.555 04:11:09 -- scripts/common.sh@352 -- # local d=2 00:07:57.555 04:11:09 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:57.555 04:11:09 -- scripts/common.sh@354 -- # echo 2 00:07:57.555 04:11:09 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:57.555 04:11:09 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:57.555 04:11:09 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:57.555 04:11:09 -- scripts/common.sh@367 -- # return 0 00:07:57.555 04:11:09 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:57.555 04:11:09 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:57.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.555 --rc genhtml_branch_coverage=1 00:07:57.555 --rc genhtml_function_coverage=1 00:07:57.555 --rc genhtml_legend=1 00:07:57.555 --rc geninfo_all_blocks=1 00:07:57.555 --rc geninfo_unexecuted_blocks=1 00:07:57.555 00:07:57.555 ' 00:07:57.555 04:11:10 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:57.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.555 --rc genhtml_branch_coverage=1 00:07:57.555 --rc genhtml_function_coverage=1 00:07:57.555 --rc genhtml_legend=1 00:07:57.555 --rc geninfo_all_blocks=1 00:07:57.555 --rc geninfo_unexecuted_blocks=1 00:07:57.555 00:07:57.555 ' 00:07:57.555 04:11:10 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:57.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.555 --rc genhtml_branch_coverage=1 00:07:57.555 --rc genhtml_function_coverage=1 00:07:57.555 --rc genhtml_legend=1 00:07:57.555 --rc geninfo_all_blocks=1 00:07:57.555 --rc geninfo_unexecuted_blocks=1 00:07:57.555 00:07:57.555 ' 00:07:57.555 04:11:10 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:57.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.555 --rc genhtml_branch_coverage=1 00:07:57.555 --rc genhtml_function_coverage=1 00:07:57.555 --rc genhtml_legend=1 00:07:57.555 --rc geninfo_all_blocks=1 00:07:57.555 --rc geninfo_unexecuted_blocks=1 00:07:57.555 00:07:57.555 ' 00:07:57.555 04:11:10 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:57.555 04:11:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:57.555 04:11:10 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:57.555 04:11:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:57.555 04:11:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.555 04:11:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.555 04:11:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.555 04:11:10 -- paths/export.sh@5 -- # export PATH 00:07:57.555 04:11:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.555 04:11:10 -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:07:57.555 04:11:10 -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:07:57.555 04:11:10 -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:07:57.555 04:11:10 -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:06.0 00:07:57.555 04:11:10 -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:07:57.555 04:11:10 -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:06.0' 
['trtype']='pcie') 00:07:57.555 04:11:10 -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:07:57.555 04:11:10 -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:57.555 04:11:10 -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:57.555 04:11:10 -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:06.0 00:07:57.555 04:11:10 -- dd/common.sh@124 -- # local pci=0000:00:06.0 lbaf id 00:07:57.555 04:11:10 -- dd/common.sh@126 -- # mapfile -t id 00:07:57.555 04:11:10 -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:06.0' 00:07:57.817 04:11:10 -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:06.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command 
Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 
Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 107 Data Units Written: 9 Host Read Commands: 2464 Host Write Commands: 95 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:07:57.817 04:11:10 -- dd/common.sh@130 -- # lbaf=04 00:07:57.818 04:11:10 -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:06.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported 
Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive 
Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 107 Data Units Written: 9 Host Read Commands: 2464 Host Write Commands: 95 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:07:57.818 04:11:10 -- dd/common.sh@132 -- # lbaf=4096 00:07:57.818 04:11:10 -- dd/common.sh@134 -- # echo 4096 00:07:57.818 04:11:10 -- dd/basic_rw.sh@93 -- # native_bs=4096 00:07:57.818 04:11:10 -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:57.818 04:11:10 -- dd/basic_rw.sh@96 -- # : 00:07:57.818 04:11:10 -- dd/basic_rw.sh@96 -- # gen_conf 00:07:57.818 04:11:10 -- common/autotest_common.sh@1087 -- # '[' 8 
-le 1 ']' 00:07:57.818 04:11:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:57.818 04:11:10 -- dd/common.sh@31 -- # xtrace_disable 00:07:57.818 04:11:10 -- common/autotest_common.sh@10 -- # set +x 00:07:57.818 04:11:10 -- common/autotest_common.sh@10 -- # set +x 00:07:57.818 ************************************ 00:07:57.818 START TEST dd_bs_lt_native_bs 00:07:57.818 ************************************ 00:07:57.818 04:11:10 -- common/autotest_common.sh@1114 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:57.818 04:11:10 -- common/autotest_common.sh@650 -- # local es=0 00:07:57.818 04:11:10 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:57.818 04:11:10 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:57.818 04:11:10 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:57.818 04:11:10 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:57.818 04:11:10 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:57.818 04:11:10 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:57.818 04:11:10 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:57.818 04:11:10 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:57.818 04:11:10 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:57.818 04:11:10 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:57.818 { 00:07:57.818 "subsystems": [ 00:07:57.818 { 00:07:57.818 "subsystem": "bdev", 00:07:57.818 "config": [ 00:07:57.818 { 00:07:57.818 "params": { 00:07:57.818 "trtype": "pcie", 00:07:57.818 "traddr": "0000:00:06.0", 00:07:57.818 "name": "Nvme0" 00:07:57.818 }, 00:07:57.818 "method": "bdev_nvme_attach_controller" 00:07:57.818 }, 00:07:57.818 { 00:07:57.818 "method": "bdev_wait_for_examine" 00:07:57.818 } 00:07:57.818 ] 00:07:57.818 } 00:07:57.818 ] 00:07:57.818 } 00:07:57.818 [2024-12-06 04:11:10.267676] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
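The native block size used below comes from the spdk_nvme_identify dump above: dd/common.sh@126-134 captures the output into an array, finds the current LBA format index, then reads that format's data size. A rough bash equivalent of those two regex steps (the identify command line is the one shown in the trace):

    mapfile -t id < <(/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:06.0')
    # "Current LBA Format: LBA Format #04"  ->  lbaf=04
    re='Current LBA Format: *LBA Format #([0-9]+)'
    [[ ${id[*]} =~ $re ]] && lbaf=${BASH_REMATCH[1]}
    # "LBA Format #04: Data Size: 4096 ..."  ->  native_bs=4096
    re="LBA Format #${lbaf}: Data Size: *([0-9]+)"
    [[ ${id[*]} =~ $re ]] && native_bs=${BASH_REMATCH[1]}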
00:07:57.818 [2024-12-06 04:11:10.267828] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69865 ] 00:07:58.077 [2024-12-06 04:11:10.410664] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.077 [2024-12-06 04:11:10.510075] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.336 [2024-12-06 04:11:10.664527] spdk_dd.c:1145:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:07:58.336 [2024-12-06 04:11:10.664600] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:58.336 [2024-12-06 04:11:10.786269] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:07:58.336 04:11:10 -- common/autotest_common.sh@653 -- # es=234 00:07:58.336 04:11:10 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:58.336 04:11:10 -- common/autotest_common.sh@662 -- # es=106 00:07:58.336 04:11:10 -- common/autotest_common.sh@663 -- # case "$es" in 00:07:58.336 04:11:10 -- common/autotest_common.sh@670 -- # es=1 00:07:58.336 04:11:10 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:58.336 00:07:58.336 real 0m0.663s 00:07:58.336 user 0m0.433s 00:07:58.336 sys 0m0.183s 00:07:58.336 04:11:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:58.336 04:11:10 -- common/autotest_common.sh@10 -- # set +x 00:07:58.336 ************************************ 00:07:58.336 END TEST dd_bs_lt_native_bs 00:07:58.336 ************************************ 00:07:58.595 04:11:10 -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:07:58.595 04:11:10 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:58.595 04:11:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:58.595 04:11:10 -- common/autotest_common.sh@10 -- # set +x 00:07:58.595 ************************************ 00:07:58.595 START TEST dd_rw 00:07:58.595 ************************************ 00:07:58.595 04:11:10 -- common/autotest_common.sh@1114 -- # basic_rw 4096 00:07:58.595 04:11:10 -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:07:58.595 04:11:10 -- dd/basic_rw.sh@12 -- # local count size 00:07:58.595 04:11:10 -- dd/basic_rw.sh@13 -- # local qds bss 00:07:58.595 04:11:10 -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:07:58.595 04:11:10 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:58.595 04:11:10 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:58.595 04:11:10 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:58.595 04:11:10 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:58.595 04:11:10 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:58.595 04:11:10 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:58.595 04:11:10 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:58.595 04:11:10 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:58.595 04:11:10 -- dd/basic_rw.sh@23 -- # count=15 00:07:58.595 04:11:10 -- dd/basic_rw.sh@24 -- # count=15 00:07:58.595 04:11:10 -- dd/basic_rw.sh@25 -- # size=61440 00:07:58.595 04:11:10 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:07:58.595 04:11:10 -- dd/common.sh@98 -- # xtrace_disable 00:07:58.595 04:11:10 -- common/autotest_common.sh@10 -- # set +x 00:07:59.162 04:11:11 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 
00:07:59.162 04:11:11 -- dd/basic_rw.sh@30 -- # gen_conf 00:07:59.162 04:11:11 -- dd/common.sh@31 -- # xtrace_disable 00:07:59.162 04:11:11 -- common/autotest_common.sh@10 -- # set +x 00:07:59.162 { 00:07:59.162 "subsystems": [ 00:07:59.162 { 00:07:59.162 "subsystem": "bdev", 00:07:59.162 "config": [ 00:07:59.162 { 00:07:59.162 "params": { 00:07:59.162 "trtype": "pcie", 00:07:59.162 "traddr": "0000:00:06.0", 00:07:59.162 "name": "Nvme0" 00:07:59.162 }, 00:07:59.162 "method": "bdev_nvme_attach_controller" 00:07:59.162 }, 00:07:59.162 { 00:07:59.162 "method": "bdev_wait_for_examine" 00:07:59.162 } 00:07:59.162 ] 00:07:59.162 } 00:07:59.162 ] 00:07:59.162 } 00:07:59.162 [2024-12-06 04:11:11.540869] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:59.162 [2024-12-06 04:11:11.540966] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69896 ] 00:07:59.162 [2024-12-06 04:11:11.679926] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.421 [2024-12-06 04:11:11.770476] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.421  [2024-12-06T04:11:12.245Z] Copying: 60/60 [kB] (average 19 MBps) 00:07:59.680 00:07:59.680 04:11:12 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:07:59.680 04:11:12 -- dd/basic_rw.sh@37 -- # gen_conf 00:07:59.680 04:11:12 -- dd/common.sh@31 -- # xtrace_disable 00:07:59.680 04:11:12 -- common/autotest_common.sh@10 -- # set +x 00:07:59.680 { 00:07:59.680 "subsystems": [ 00:07:59.680 { 00:07:59.680 "subsystem": "bdev", 00:07:59.680 "config": [ 00:07:59.680 { 00:07:59.680 "params": { 00:07:59.680 "trtype": "pcie", 00:07:59.680 "traddr": "0000:00:06.0", 00:07:59.680 "name": "Nvme0" 00:07:59.680 }, 00:07:59.680 "method": "bdev_nvme_attach_controller" 00:07:59.680 }, 00:07:59.680 { 00:07:59.680 "method": "bdev_wait_for_examine" 00:07:59.680 } 00:07:59.680 ] 00:07:59.680 } 00:07:59.680 ] 00:07:59.680 } 00:07:59.680 [2024-12-06 04:11:12.202933] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:59.680 [2024-12-06 04:11:12.203069] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69909 ] 00:07:59.937 [2024-12-06 04:11:12.343320] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.937 [2024-12-06 04:11:12.414067] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.194  [2024-12-06T04:11:13.016Z] Copying: 60/60 [kB] (average 19 MBps) 00:08:00.451 00:08:00.451 04:11:12 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:00.451 04:11:12 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:08:00.451 04:11:12 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:00.451 04:11:12 -- dd/common.sh@11 -- # local nvme_ref= 00:08:00.451 04:11:12 -- dd/common.sh@12 -- # local size=61440 00:08:00.451 04:11:12 -- dd/common.sh@14 -- # local bs=1048576 00:08:00.451 04:11:12 -- dd/common.sh@15 -- # local count=1 00:08:00.451 04:11:12 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:00.451 04:11:12 -- dd/common.sh@18 -- # gen_conf 00:08:00.451 04:11:12 -- dd/common.sh@31 -- # xtrace_disable 00:08:00.451 04:11:12 -- common/autotest_common.sh@10 -- # set +x 00:08:00.451 [2024-12-06 04:11:12.829078] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:00.451 [2024-12-06 04:11:12.829205] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69922 ] 00:08:00.451 { 00:08:00.451 "subsystems": [ 00:08:00.451 { 00:08:00.451 "subsystem": "bdev", 00:08:00.451 "config": [ 00:08:00.451 { 00:08:00.451 "params": { 00:08:00.451 "trtype": "pcie", 00:08:00.451 "traddr": "0000:00:06.0", 00:08:00.451 "name": "Nvme0" 00:08:00.451 }, 00:08:00.451 "method": "bdev_nvme_attach_controller" 00:08:00.451 }, 00:08:00.451 { 00:08:00.451 "method": "bdev_wait_for_examine" 00:08:00.451 } 00:08:00.451 ] 00:08:00.451 } 00:08:00.451 ] 00:08:00.451 } 00:08:00.451 [2024-12-06 04:11:12.970660] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.708 [2024-12-06 04:11:13.059935] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.708  [2024-12-06T04:11:13.532Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:08:00.967 00:08:00.967 04:11:13 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:00.967 04:11:13 -- dd/basic_rw.sh@23 -- # count=15 00:08:00.967 04:11:13 -- dd/basic_rw.sh@24 -- # count=15 00:08:00.967 04:11:13 -- dd/basic_rw.sh@25 -- # size=61440 00:08:00.967 04:11:13 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:08:00.967 04:11:13 -- dd/common.sh@98 -- # xtrace_disable 00:08:00.967 04:11:13 -- common/autotest_common.sh@10 -- # set +x 00:08:01.549 04:11:13 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:08:01.549 04:11:13 -- dd/basic_rw.sh@30 -- # gen_conf 00:08:01.549 04:11:13 -- dd/common.sh@31 -- # xtrace_disable 00:08:01.549 04:11:13 -- common/autotest_common.sh@10 -- # set +x 00:08:01.549 [2024-12-06 04:11:14.011541] Starting SPDK v24.01.1-pre git sha1 
c13c99a5e / DPDK 23.11.0 initialization... 00:08:01.549 [2024-12-06 04:11:14.011681] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69946 ] 00:08:01.549 { 00:08:01.549 "subsystems": [ 00:08:01.549 { 00:08:01.549 "subsystem": "bdev", 00:08:01.549 "config": [ 00:08:01.549 { 00:08:01.549 "params": { 00:08:01.549 "trtype": "pcie", 00:08:01.549 "traddr": "0000:00:06.0", 00:08:01.549 "name": "Nvme0" 00:08:01.549 }, 00:08:01.549 "method": "bdev_nvme_attach_controller" 00:08:01.549 }, 00:08:01.549 { 00:08:01.549 "method": "bdev_wait_for_examine" 00:08:01.549 } 00:08:01.549 ] 00:08:01.549 } 00:08:01.549 ] 00:08:01.549 } 00:08:01.807 [2024-12-06 04:11:14.145959] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.807 [2024-12-06 04:11:14.237244] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.065  [2024-12-06T04:11:14.630Z] Copying: 60/60 [kB] (average 58 MBps) 00:08:02.065 00:08:02.065 04:11:14 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:08:02.065 04:11:14 -- dd/basic_rw.sh@37 -- # gen_conf 00:08:02.065 04:11:14 -- dd/common.sh@31 -- # xtrace_disable 00:08:02.065 04:11:14 -- common/autotest_common.sh@10 -- # set +x 00:08:02.323 [2024-12-06 04:11:14.657924] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:02.324 [2024-12-06 04:11:14.658041] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69958 ] 00:08:02.324 { 00:08:02.324 "subsystems": [ 00:08:02.324 { 00:08:02.324 "subsystem": "bdev", 00:08:02.324 "config": [ 00:08:02.324 { 00:08:02.324 "params": { 00:08:02.324 "trtype": "pcie", 00:08:02.324 "traddr": "0000:00:06.0", 00:08:02.324 "name": "Nvme0" 00:08:02.324 }, 00:08:02.324 "method": "bdev_nvme_attach_controller" 00:08:02.324 }, 00:08:02.324 { 00:08:02.324 "method": "bdev_wait_for_examine" 00:08:02.324 } 00:08:02.324 ] 00:08:02.324 } 00:08:02.324 ] 00:08:02.324 } 00:08:02.324 [2024-12-06 04:11:14.793591] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.324 [2024-12-06 04:11:14.886639] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.582  [2024-12-06T04:11:15.406Z] Copying: 60/60 [kB] (average 58 MBps) 00:08:02.841 00:08:02.841 04:11:15 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:02.841 04:11:15 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:08:02.841 04:11:15 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:02.841 04:11:15 -- dd/common.sh@11 -- # local nvme_ref= 00:08:02.841 04:11:15 -- dd/common.sh@12 -- # local size=61440 00:08:02.841 04:11:15 -- dd/common.sh@14 -- # local bs=1048576 00:08:02.841 04:11:15 -- dd/common.sh@15 -- # local count=1 00:08:02.841 04:11:15 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:02.841 04:11:15 -- dd/common.sh@18 -- # gen_conf 00:08:02.841 04:11:15 -- dd/common.sh@31 -- # xtrace_disable 00:08:02.841 04:11:15 -- 
common/autotest_common.sh@10 -- # set +x 00:08:02.841 { 00:08:02.841 "subsystems": [ 00:08:02.841 { 00:08:02.841 "subsystem": "bdev", 00:08:02.841 "config": [ 00:08:02.841 { 00:08:02.841 "params": { 00:08:02.841 "trtype": "pcie", 00:08:02.841 "traddr": "0000:00:06.0", 00:08:02.841 "name": "Nvme0" 00:08:02.841 }, 00:08:02.841 "method": "bdev_nvme_attach_controller" 00:08:02.841 }, 00:08:02.841 { 00:08:02.841 "method": "bdev_wait_for_examine" 00:08:02.841 } 00:08:02.841 ] 00:08:02.841 } 00:08:02.841 ] 00:08:02.841 } 00:08:02.841 [2024-12-06 04:11:15.400655] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:02.841 [2024-12-06 04:11:15.400868] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69972 ] 00:08:03.099 [2024-12-06 04:11:15.552049] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.099 [2024-12-06 04:11:15.646308] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.357  [2024-12-06T04:11:16.179Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:03.614 00:08:03.614 04:11:16 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:08:03.614 04:11:16 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:03.614 04:11:16 -- dd/basic_rw.sh@23 -- # count=7 00:08:03.614 04:11:16 -- dd/basic_rw.sh@24 -- # count=7 00:08:03.614 04:11:16 -- dd/basic_rw.sh@25 -- # size=57344 00:08:03.614 04:11:16 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:08:03.614 04:11:16 -- dd/common.sh@98 -- # xtrace_disable 00:08:03.614 04:11:16 -- common/autotest_common.sh@10 -- # set +x 00:08:04.182 04:11:16 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:08:04.182 04:11:16 -- dd/basic_rw.sh@30 -- # gen_conf 00:08:04.182 04:11:16 -- dd/common.sh@31 -- # xtrace_disable 00:08:04.182 04:11:16 -- common/autotest_common.sh@10 -- # set +x 00:08:04.182 [2024-12-06 04:11:16.552086] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
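The block sizes and transfer sizes cycling through these passes come straight from the loop traced at dd/basic_rw.sh@15-25: the native block size is shifted left to get 1x, 2x and 4x sizes, each paired with queue depths 1 and 64, and the byte count handed to gen_bytes is simply count times block size. Worked out for the runs in this log:

    native_bs=4096
    bss=( $((native_bs << 0)) $((native_bs << 1)) $((native_bs << 2)) )   # 4096 8192 16384
    qds=(1 64)
    echo $((15 * 4096)) $((7 * 8192))   # 61440 and 57344, the sizes seen above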
00:08:04.182 [2024-12-06 04:11:16.552226] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69995 ] 00:08:04.182 { 00:08:04.182 "subsystems": [ 00:08:04.182 { 00:08:04.182 "subsystem": "bdev", 00:08:04.182 "config": [ 00:08:04.182 { 00:08:04.182 "params": { 00:08:04.182 "trtype": "pcie", 00:08:04.182 "traddr": "0000:00:06.0", 00:08:04.182 "name": "Nvme0" 00:08:04.182 }, 00:08:04.182 "method": "bdev_nvme_attach_controller" 00:08:04.182 }, 00:08:04.182 { 00:08:04.182 "method": "bdev_wait_for_examine" 00:08:04.182 } 00:08:04.182 ] 00:08:04.182 } 00:08:04.182 ] 00:08:04.182 } 00:08:04.182 [2024-12-06 04:11:16.691606] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.440 [2024-12-06 04:11:16.781459] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.440  [2024-12-06T04:11:17.264Z] Copying: 56/56 [kB] (average 54 MBps) 00:08:04.699 00:08:04.699 04:11:17 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:08:04.699 04:11:17 -- dd/basic_rw.sh@37 -- # gen_conf 00:08:04.699 04:11:17 -- dd/common.sh@31 -- # xtrace_disable 00:08:04.699 04:11:17 -- common/autotest_common.sh@10 -- # set +x 00:08:04.699 [2024-12-06 04:11:17.202580] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:04.699 [2024-12-06 04:11:17.202714] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70008 ] 00:08:04.699 { 00:08:04.699 "subsystems": [ 00:08:04.699 { 00:08:04.699 "subsystem": "bdev", 00:08:04.699 "config": [ 00:08:04.699 { 00:08:04.699 "params": { 00:08:04.699 "trtype": "pcie", 00:08:04.699 "traddr": "0000:00:06.0", 00:08:04.699 "name": "Nvme0" 00:08:04.699 }, 00:08:04.699 "method": "bdev_nvme_attach_controller" 00:08:04.699 }, 00:08:04.699 { 00:08:04.699 "method": "bdev_wait_for_examine" 00:08:04.699 } 00:08:04.699 ] 00:08:04.699 } 00:08:04.699 ] 00:08:04.699 } 00:08:04.958 [2024-12-06 04:11:17.343661] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.958 [2024-12-06 04:11:17.437055] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.217  [2024-12-06T04:11:18.041Z] Copying: 56/56 [kB] (average 27 MBps) 00:08:05.476 00:08:05.476 04:11:17 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:05.476 04:11:17 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:08:05.476 04:11:17 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:05.476 04:11:17 -- dd/common.sh@11 -- # local nvme_ref= 00:08:05.476 04:11:17 -- dd/common.sh@12 -- # local size=57344 00:08:05.476 04:11:17 -- dd/common.sh@14 -- # local bs=1048576 00:08:05.476 04:11:17 -- dd/common.sh@15 -- # local count=1 00:08:05.476 04:11:17 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:05.476 04:11:17 -- dd/common.sh@18 -- # gen_conf 00:08:05.476 04:11:17 -- dd/common.sh@31 -- # xtrace_disable 00:08:05.476 04:11:17 -- common/autotest_common.sh@10 -- # set +x 00:08:05.476 [2024-12-06 
04:11:17.869334] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:05.476 [2024-12-06 04:11:17.869480] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70021 ] 00:08:05.476 { 00:08:05.476 "subsystems": [ 00:08:05.476 { 00:08:05.476 "subsystem": "bdev", 00:08:05.476 "config": [ 00:08:05.476 { 00:08:05.476 "params": { 00:08:05.476 "trtype": "pcie", 00:08:05.476 "traddr": "0000:00:06.0", 00:08:05.476 "name": "Nvme0" 00:08:05.476 }, 00:08:05.476 "method": "bdev_nvme_attach_controller" 00:08:05.476 }, 00:08:05.476 { 00:08:05.476 "method": "bdev_wait_for_examine" 00:08:05.476 } 00:08:05.476 ] 00:08:05.476 } 00:08:05.476 ] 00:08:05.476 } 00:08:05.476 [2024-12-06 04:11:18.010197] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.736 [2024-12-06 04:11:18.098811] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.736  [2024-12-06T04:11:18.559Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:05.994 00:08:05.994 04:11:18 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:05.994 04:11:18 -- dd/basic_rw.sh@23 -- # count=7 00:08:05.994 04:11:18 -- dd/basic_rw.sh@24 -- # count=7 00:08:05.994 04:11:18 -- dd/basic_rw.sh@25 -- # size=57344 00:08:05.994 04:11:18 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:08:05.994 04:11:18 -- dd/common.sh@98 -- # xtrace_disable 00:08:05.994 04:11:18 -- common/autotest_common.sh@10 -- # set +x 00:08:06.564 04:11:18 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:08:06.564 04:11:18 -- dd/basic_rw.sh@30 -- # gen_conf 00:08:06.564 04:11:18 -- dd/common.sh@31 -- # xtrace_disable 00:08:06.564 04:11:18 -- common/autotest_common.sh@10 -- # set +x 00:08:06.564 [2024-12-06 04:11:19.019729] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:06.564 [2024-12-06 04:11:19.019873] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70039 ] 00:08:06.564 { 00:08:06.564 "subsystems": [ 00:08:06.564 { 00:08:06.564 "subsystem": "bdev", 00:08:06.564 "config": [ 00:08:06.564 { 00:08:06.564 "params": { 00:08:06.564 "trtype": "pcie", 00:08:06.564 "traddr": "0000:00:06.0", 00:08:06.564 "name": "Nvme0" 00:08:06.564 }, 00:08:06.564 "method": "bdev_nvme_attach_controller" 00:08:06.564 }, 00:08:06.564 { 00:08:06.564 "method": "bdev_wait_for_examine" 00:08:06.564 } 00:08:06.564 ] 00:08:06.564 } 00:08:06.564 ] 00:08:06.564 } 00:08:06.823 [2024-12-06 04:11:19.160029] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.823 [2024-12-06 04:11:19.251671] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.083  [2024-12-06T04:11:19.648Z] Copying: 56/56 [kB] (average 54 MBps) 00:08:07.083 00:08:07.083 04:11:19 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:08:07.083 04:11:19 -- dd/basic_rw.sh@37 -- # gen_conf 00:08:07.083 04:11:19 -- dd/common.sh@31 -- # xtrace_disable 00:08:07.083 04:11:19 -- common/autotest_common.sh@10 -- # set +x 00:08:07.342 [2024-12-06 04:11:19.677422] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:07.342 [2024-12-06 04:11:19.677527] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70057 ] 00:08:07.342 { 00:08:07.342 "subsystems": [ 00:08:07.342 { 00:08:07.342 "subsystem": "bdev", 00:08:07.342 "config": [ 00:08:07.342 { 00:08:07.342 "params": { 00:08:07.342 "trtype": "pcie", 00:08:07.342 "traddr": "0000:00:06.0", 00:08:07.342 "name": "Nvme0" 00:08:07.342 }, 00:08:07.342 "method": "bdev_nvme_attach_controller" 00:08:07.342 }, 00:08:07.342 { 00:08:07.342 "method": "bdev_wait_for_examine" 00:08:07.342 } 00:08:07.342 ] 00:08:07.342 } 00:08:07.342 ] 00:08:07.342 } 00:08:07.342 [2024-12-06 04:11:19.813617] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.342 [2024-12-06 04:11:19.897212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.602  [2024-12-06T04:11:20.426Z] Copying: 56/56 [kB] (average 54 MBps) 00:08:07.861 00:08:07.861 04:11:20 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:07.861 04:11:20 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:08:07.861 04:11:20 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:07.861 04:11:20 -- dd/common.sh@11 -- # local nvme_ref= 00:08:07.861 04:11:20 -- dd/common.sh@12 -- # local size=57344 00:08:07.861 04:11:20 -- dd/common.sh@14 -- # local bs=1048576 00:08:07.861 04:11:20 -- dd/common.sh@15 -- # local count=1 00:08:07.861 04:11:20 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:07.861 04:11:20 -- dd/common.sh@18 -- # gen_conf 00:08:07.861 04:11:20 -- dd/common.sh@31 -- # xtrace_disable 00:08:07.861 04:11:20 -- common/autotest_common.sh@10 -- # set +x 00:08:07.861 [2024-12-06 
04:11:20.327117] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:07.861 [2024-12-06 04:11:20.327769] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70071 ] 00:08:07.861 { 00:08:07.861 "subsystems": [ 00:08:07.861 { 00:08:07.861 "subsystem": "bdev", 00:08:07.861 "config": [ 00:08:07.861 { 00:08:07.861 "params": { 00:08:07.861 "trtype": "pcie", 00:08:07.861 "traddr": "0000:00:06.0", 00:08:07.861 "name": "Nvme0" 00:08:07.861 }, 00:08:07.861 "method": "bdev_nvme_attach_controller" 00:08:07.861 }, 00:08:07.861 { 00:08:07.861 "method": "bdev_wait_for_examine" 00:08:07.861 } 00:08:07.861 ] 00:08:07.861 } 00:08:07.861 ] 00:08:07.861 } 00:08:08.120 [2024-12-06 04:11:20.468054] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.120 [2024-12-06 04:11:20.559081] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.380  [2024-12-06T04:11:20.945Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:08:08.380 00:08:08.380 04:11:20 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:08:08.380 04:11:20 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:08.380 04:11:20 -- dd/basic_rw.sh@23 -- # count=3 00:08:08.380 04:11:20 -- dd/basic_rw.sh@24 -- # count=3 00:08:08.380 04:11:20 -- dd/basic_rw.sh@25 -- # size=49152 00:08:08.380 04:11:20 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:08:08.380 04:11:20 -- dd/common.sh@98 -- # xtrace_disable 00:08:08.380 04:11:20 -- common/autotest_common.sh@10 -- # set +x 00:08:08.948 04:11:21 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:08:08.948 04:11:21 -- dd/basic_rw.sh@30 -- # gen_conf 00:08:08.948 04:11:21 -- dd/common.sh@31 -- # xtrace_disable 00:08:08.948 04:11:21 -- common/autotest_common.sh@10 -- # set +x 00:08:08.948 [2024-12-06 04:11:21.424865] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:08.948 [2024-12-06 04:11:21.424973] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70089 ] 00:08:08.948 { 00:08:08.948 "subsystems": [ 00:08:08.948 { 00:08:08.948 "subsystem": "bdev", 00:08:08.948 "config": [ 00:08:08.948 { 00:08:08.948 "params": { 00:08:08.948 "trtype": "pcie", 00:08:08.948 "traddr": "0000:00:06.0", 00:08:08.948 "name": "Nvme0" 00:08:08.948 }, 00:08:08.948 "method": "bdev_nvme_attach_controller" 00:08:08.948 }, 00:08:08.948 { 00:08:08.948 "method": "bdev_wait_for_examine" 00:08:08.948 } 00:08:08.948 ] 00:08:08.948 } 00:08:08.948 ] 00:08:08.948 } 00:08:09.211 [2024-12-06 04:11:21.565518] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.211 [2024-12-06 04:11:21.656193] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.471  [2024-12-06T04:11:22.036Z] Copying: 48/48 [kB] (average 46 MBps) 00:08:09.471 00:08:09.731 04:11:22 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:08:09.731 04:11:22 -- dd/basic_rw.sh@37 -- # gen_conf 00:08:09.731 04:11:22 -- dd/common.sh@31 -- # xtrace_disable 00:08:09.731 04:11:22 -- common/autotest_common.sh@10 -- # set +x 00:08:09.731 [2024-12-06 04:11:22.084315] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:09.731 [2024-12-06 04:11:22.084438] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70101 ] 00:08:09.731 { 00:08:09.731 "subsystems": [ 00:08:09.731 { 00:08:09.731 "subsystem": "bdev", 00:08:09.731 "config": [ 00:08:09.731 { 00:08:09.731 "params": { 00:08:09.731 "trtype": "pcie", 00:08:09.731 "traddr": "0000:00:06.0", 00:08:09.731 "name": "Nvme0" 00:08:09.731 }, 00:08:09.731 "method": "bdev_nvme_attach_controller" 00:08:09.731 }, 00:08:09.731 { 00:08:09.731 "method": "bdev_wait_for_examine" 00:08:09.731 } 00:08:09.731 ] 00:08:09.731 } 00:08:09.731 ] 00:08:09.731 } 00:08:09.731 [2024-12-06 04:11:22.220768] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.989 [2024-12-06 04:11:22.306070] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.989  [2024-12-06T04:11:22.813Z] Copying: 48/48 [kB] (average 46 MBps) 00:08:10.248 00:08:10.248 04:11:22 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:10.248 04:11:22 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:08:10.248 04:11:22 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:10.248 04:11:22 -- dd/common.sh@11 -- # local nvme_ref= 00:08:10.248 04:11:22 -- dd/common.sh@12 -- # local size=49152 00:08:10.248 04:11:22 -- dd/common.sh@14 -- # local bs=1048576 00:08:10.248 04:11:22 -- dd/common.sh@15 -- # local count=1 00:08:10.248 04:11:22 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:10.248 04:11:22 -- dd/common.sh@18 -- # gen_conf 00:08:10.248 04:11:22 -- dd/common.sh@31 -- # xtrace_disable 00:08:10.248 04:11:22 -- common/autotest_common.sh@10 -- # set +x 00:08:10.248 [2024-12-06 
04:11:22.752472] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:10.248 [2024-12-06 04:11:22.752575] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70115 ] 00:08:10.248 { 00:08:10.248 "subsystems": [ 00:08:10.248 { 00:08:10.248 "subsystem": "bdev", 00:08:10.248 "config": [ 00:08:10.248 { 00:08:10.248 "params": { 00:08:10.248 "trtype": "pcie", 00:08:10.248 "traddr": "0000:00:06.0", 00:08:10.248 "name": "Nvme0" 00:08:10.248 }, 00:08:10.248 "method": "bdev_nvme_attach_controller" 00:08:10.248 }, 00:08:10.248 { 00:08:10.248 "method": "bdev_wait_for_examine" 00:08:10.248 } 00:08:10.248 ] 00:08:10.248 } 00:08:10.248 ] 00:08:10.248 } 00:08:10.508 [2024-12-06 04:11:22.895061] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.508 [2024-12-06 04:11:22.989807] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.766  [2024-12-06T04:11:23.591Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:11.026 00:08:11.026 04:11:23 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:11.026 04:11:23 -- dd/basic_rw.sh@23 -- # count=3 00:08:11.026 04:11:23 -- dd/basic_rw.sh@24 -- # count=3 00:08:11.026 04:11:23 -- dd/basic_rw.sh@25 -- # size=49152 00:08:11.026 04:11:23 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:08:11.026 04:11:23 -- dd/common.sh@98 -- # xtrace_disable 00:08:11.026 04:11:23 -- common/autotest_common.sh@10 -- # set +x 00:08:11.326 04:11:23 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:08:11.326 04:11:23 -- dd/basic_rw.sh@30 -- # gen_conf 00:08:11.326 04:11:23 -- dd/common.sh@31 -- # xtrace_disable 00:08:11.326 04:11:23 -- common/autotest_common.sh@10 -- # set +x 00:08:11.326 [2024-12-06 04:11:23.851003] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:11.326 [2024-12-06 04:11:23.851100] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70137 ] 00:08:11.326 { 00:08:11.326 "subsystems": [ 00:08:11.326 { 00:08:11.326 "subsystem": "bdev", 00:08:11.326 "config": [ 00:08:11.326 { 00:08:11.326 "params": { 00:08:11.326 "trtype": "pcie", 00:08:11.326 "traddr": "0000:00:06.0", 00:08:11.326 "name": "Nvme0" 00:08:11.326 }, 00:08:11.326 "method": "bdev_nvme_attach_controller" 00:08:11.326 }, 00:08:11.326 { 00:08:11.326 "method": "bdev_wait_for_examine" 00:08:11.326 } 00:08:11.326 ] 00:08:11.326 } 00:08:11.326 ] 00:08:11.326 } 00:08:11.585 [2024-12-06 04:11:23.993608] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.585 [2024-12-06 04:11:24.084965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.844  [2024-12-06T04:11:24.668Z] Copying: 48/48 [kB] (average 46 MBps) 00:08:12.103 00:08:12.103 04:11:24 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:08:12.103 04:11:24 -- dd/basic_rw.sh@37 -- # gen_conf 00:08:12.103 04:11:24 -- dd/common.sh@31 -- # xtrace_disable 00:08:12.103 04:11:24 -- common/autotest_common.sh@10 -- # set +x 00:08:12.103 [2024-12-06 04:11:24.532543] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:12.103 [2024-12-06 04:11:24.532639] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70151 ] 00:08:12.103 { 00:08:12.103 "subsystems": [ 00:08:12.103 { 00:08:12.103 "subsystem": "bdev", 00:08:12.103 "config": [ 00:08:12.103 { 00:08:12.103 "params": { 00:08:12.103 "trtype": "pcie", 00:08:12.103 "traddr": "0000:00:06.0", 00:08:12.103 "name": "Nvme0" 00:08:12.103 }, 00:08:12.103 "method": "bdev_nvme_attach_controller" 00:08:12.103 }, 00:08:12.103 { 00:08:12.103 "method": "bdev_wait_for_examine" 00:08:12.103 } 00:08:12.103 ] 00:08:12.103 } 00:08:12.103 ] 00:08:12.103 } 00:08:12.363 [2024-12-06 04:11:24.672011] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.363 [2024-12-06 04:11:24.733356] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.363  [2024-12-06T04:11:25.188Z] Copying: 48/48 [kB] (average 46 MBps) 00:08:12.623 00:08:12.623 04:11:25 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:12.623 04:11:25 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:08:12.623 04:11:25 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:12.623 04:11:25 -- dd/common.sh@11 -- # local nvme_ref= 00:08:12.623 04:11:25 -- dd/common.sh@12 -- # local size=49152 00:08:12.623 04:11:25 -- dd/common.sh@14 -- # local bs=1048576 00:08:12.623 04:11:25 -- dd/common.sh@15 -- # local count=1 00:08:12.623 04:11:25 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:12.623 04:11:25 -- dd/common.sh@18 -- # gen_conf 00:08:12.623 04:11:25 -- dd/common.sh@31 -- # xtrace_disable 00:08:12.623 04:11:25 -- common/autotest_common.sh@10 -- # set +x 00:08:12.623 [2024-12-06 
04:11:25.134151] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:12.623 [2024-12-06 04:11:25.134242] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70164 ] 00:08:12.623 { 00:08:12.623 "subsystems": [ 00:08:12.623 { 00:08:12.623 "subsystem": "bdev", 00:08:12.623 "config": [ 00:08:12.623 { 00:08:12.623 "params": { 00:08:12.623 "trtype": "pcie", 00:08:12.623 "traddr": "0000:00:06.0", 00:08:12.623 "name": "Nvme0" 00:08:12.623 }, 00:08:12.623 "method": "bdev_nvme_attach_controller" 00:08:12.623 }, 00:08:12.623 { 00:08:12.623 "method": "bdev_wait_for_examine" 00:08:12.623 } 00:08:12.623 ] 00:08:12.623 } 00:08:12.623 ] 00:08:12.623 } 00:08:12.881 [2024-12-06 04:11:25.271077] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.881 [2024-12-06 04:11:25.351582] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.139  [2024-12-06T04:11:25.963Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:13.398 00:08:13.398 00:08:13.398 real 0m14.792s 00:08:13.398 user 0m10.551s 00:08:13.398 sys 0m3.170s 00:08:13.398 04:11:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:13.398 04:11:25 -- common/autotest_common.sh@10 -- # set +x 00:08:13.398 ************************************ 00:08:13.398 END TEST dd_rw 00:08:13.398 ************************************ 00:08:13.398 04:11:25 -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:08:13.398 04:11:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:13.398 04:11:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:13.398 04:11:25 -- common/autotest_common.sh@10 -- # set +x 00:08:13.398 ************************************ 00:08:13.398 START TEST dd_rw_offset 00:08:13.398 ************************************ 00:08:13.398 04:11:25 -- common/autotest_common.sh@1114 -- # basic_offset 00:08:13.398 04:11:25 -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:08:13.398 04:11:25 -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:08:13.398 04:11:25 -- dd/common.sh@98 -- # xtrace_disable 00:08:13.398 04:11:25 -- common/autotest_common.sh@10 -- # set +x 00:08:13.398 04:11:25 -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:08:13.398 04:11:25 -- dd/basic_rw.sh@56 -- # 
data=64w03yoy5kg2g0f3nfoqm6d8knm3cmou95g4994qskog606pqh03kvvr83e7m4pv2165wwmc62jm4234prthv9lvjblqx1iczjnj79fz8a9j3wxo4l0xritpqnm8chcryfa7i01px7zs55i93rlufxkttsjfwqzn15lbwvtbnezoetvfer8h2jtc2xoyq6tv0nmj1xkyeqy46cwxeel9rlfrxm1ika8aipo0rj5wcz4zrgz1cnod34e3u4ehb6xkpn7nwx7h2zjmetwpv7h5d00irp8rfneazof5n74hp3phlhy21v7f4reoc9egm52uu3ang63lqcmeworg6uq16f6xrv49f15lfsfzhs8ui3g2to02k5lof45z3atwrjht0dk2qs0ayioq6eem08btgbufguracw5ufsg8u2zkr9xszw44jt9upl04vzzowvgcekq0bs4yfuebgdb48xv9fayv75fv5sfnj57n2iui923evwol84zid3w2iqc4zkijlusigdrpdaa9bxnqjylb1lkflq4z1imruu2uzwz7pvh0njv4msar2q80tzlambk8jm7h9xyad0oaq742qd5h6660mtgzalzrktjvrlclgathzkb87brk3iqzua389f92t0bc4xzvebv2wnfoo9dx5wk3pkdfaakq0xn50ohur99huitmgcj7ihovfjsrm820zs350yphlj9xps3ws1hp3egg5ojmr05ppfdijm4khhltd1i04sr6652kvnpb0en6wltr3px1co36pehkuz2ktqkezebe5bktqo6ti1x4yr9efooe4mxie3pr67m9ecf8fsum73za47swlylqxkgyrbc4e3a2oi3qjuaig3y7n0fvajg1y23i5ozj64fqqxx7u179744oxs52j9fbo47yqqbmdy1xlandv49m1355pjkxciikrhs22liu31afyt8rartnn6jbi5q8b0tl8purjln6npeol9y3jaront64xqhhuzqeqxg1vv2qkkevvs4hnesuezf8ss1hodugtcp04gb2l7jroudbtrphb86bzzoikizzj6i8bu29xpir6zp1nmey29wbtpaxuoglk0mdzg2h674aj02n5l3vkcbna5kvc9pze8qeo8likuwt7nptc6uirgmvek1zyxxbxrfo8oty5n1st372h00m13ne71r258oo2teb72xo55rkyvczxnfonhoi2n8zyul68ep58ks7zwreml4cfoxb4ydcnydwcw059jaswcx6sxvwrmnvkvr7b9p7oee6nv0gv3nhvm9ie8gscek2del9tq6tli8ali0iy0l9rwa2sqirejjoeas9l2846enma57ohxxe4mbkeyhzkygy4288dhpk6ft3v3g7o7jzdeivvnbsp33ykwuhx3r42aa1jqgdw29hh3ilhlhgcjvgyfworcox2skilmepi5p1n9o3iyhstlde664eilbvj8wy8ludphgwhe2fsvd3vbgt1z8cc3nbpc8rcfhfbkzas6e1t5hyrkxxbreb5i2y7aj6c50j8erykg1gp7ss93hu6fvuiqlak3vct1oxktuto75y8kq4fsxjsi0o9awddwwdqanoqa4kg3c6qudbg2vwkrg2aov4wx2dt1r0m38ipc6e489lqvscn2cfurnt2crgb18tfxq72apro4t2hs3mxpfsg7cd6qihh4je7hcfn67j8tcgr7ulrzp5gefls9cw0dma71oq6cwo87oge12kwut2poeebjpwaw3ip8z6bp85yf9ygj7a9xyh650zskmmv9b288isi5e8rvlhjssgxbo9a4mkeoets7zebfw99m81k9kkidk9w48ubtw4zvdg249nm3pp7zxpmzraya64y7q713cwmkwhqcvyt6zzuhtbu6ysoxh2bjxximbd4mwxa6n77tvz6g78cbco8lqe5jz5oq3go24gfpx8frnppakxhc9mn9ub8ih5wfd0cvxtw9bf906564ijqe1d7wuzgos5uzg62npyje3u9ddy90g7t7j6w56d5w41pijfophyzxbz9udw1cpp9vareniu7pshqncpikz6bu9fmcr2m0tgahu4cudnskd8nn4lcj4qfayv4d7oxfun98lya7dcjtuja59identfo6tv5e90qjd2i5w0nyqz32trqci0e3ngxrb2jponir10c9a8fzy969ak50mi8kr0117xwuho1w5v2dvyi3b9hg1epe5wh666u2npnrzniejjgz1n33no4s4y1inn3kjj1udcrsub4by9zsbye8w5b9uglctpfl6xcr6qr16600ejitw29ss4njdnzx3yg3yh3cm6hfm6w9pxfshf0vc1nh99xszjscb4o4h7bd2cfae75hix1woa05d7ritak3hwm0kp5gvsw3zzx1hd5ma9agadcp8th1k8kjxpiop85iy77tkd64a1s89nh5lfortf0nnpirrn4svwgshac5z8itwgmn3r22fp38d66qy7lrqhewho5krvhcathgzek3xdy15qhw33e25vj0vl2sc79qh64373c7yqa5m4ql12b9i8jqns78xgiilk82810pv4m5fmsu9udlugryhrkr3yaw0qfib12hmnjt4ssmg524inn9vydk1z8uwk1ayij6i2p8gqed164omxnf1lkoc3xfwgvr8gvtaxq72idjrurvnmaawki0wr1m35adnasjihvty4w6uj5337vgpkyxymupnrnvzfoj2exgqi6wb40v2rxv7bcynsh5h7xsnp3e8kuii8btspj3whe4p6ojbjmlbu8mun86o4o9gl7p07ag9my2d4xq8tqj6olx26a3r16385g6j7dorcrmpoka3brn42nn4uqgeecirq5rju8xc45t2c9drfid6fjhj62pmbw3t2nqif6zopk5awrppv37diqtj1n3w4hnfs8tlb7e264nuj0isyvy0e7s36f4otrvkgj4yku3c5808w8i4syl2t9x6yqtazzdn84fguie2pa4d09mur53pgqo2tbgagmpjz85yz1cky4m371odzw7n1txfdnjxfcbx0osa6n2d9djsomru7ku1ui7hfpoc1xpk4hmbdpoywhvn4whxe940sygwxiyr3txlxwlr31tceszekjn4u2kmosvm2kv0wv23bu5rnqxia8aqkc62csrqxtqej9hbuhe3qv3odxyqtxzs8yiekm32iwxuwswacmww10ik18hs67065kn185j4qdl131jndslpbhx2e85pnqazb6vzbj1pf1g9fd6ezqos4p27pwtku64gaoa8i7wwql8up4i879348110uqhqo6q3mpioahwg3oiyglgzf7cxa9zkd6s9hln7w2xc4hkndx2i02zsukgyk9ti6bkcmo92kat8i7uxxokurkx7gljofbxrx2csslhqqwh7o21hstd02opxjbdsfirqfma3g9c0lqe6n6a4sffg1qphwppcy9onxem2nghc1gadtzo7xxh9z7r13tbtxpprqkq7v12b50
7gm7ph63qvyzskq9nsfqhvi66foz3g5fzw3z6xsce0r3eiwv1p9ilqis002955qc8yszjolwyv17xv8lp70naysfqcfic56nvmsc1kyur0ezzai97d5t9myjv462fi4i2sz3gjptdchxkxyzfkg8co56fl6xx43i6bo2b3veogejzycvunfi4vx1b6ql3g5xq8ok6g5vx7d43mhqrlykmgbjdspac4hwctx9l9yri68uffbnqupsdrc9ukrg842xz1v8pnecqftfa6wkx2o0ipfoo2o78fo0zjztv07etw61pwlq1onumte271284laupj6d888xgz6ojaik2zysx3mc54hkaklnen3jdya9rtx7ifzrga5y1rde07ewpcxnywpwltgd0w8gys4t7w951rl0ewzswa76s2zwdpxfiwhrxgtqzl4i227h8maaczlawfnyv363lw6xpnjeuipea69ydia5uy6wfnmfb471rtkl7hzxh4a46bwwtc330gic5sdo40ywo3pua64soy78jkeq8l6k3wxbi5 00:08:13.398 04:11:25 -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:08:13.398 04:11:25 -- dd/basic_rw.sh@59 -- # gen_conf 00:08:13.398 04:11:25 -- dd/common.sh@31 -- # xtrace_disable 00:08:13.398 04:11:25 -- common/autotest_common.sh@10 -- # set +x 00:08:13.398 [2024-12-06 04:11:25.863181] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:13.398 [2024-12-06 04:11:25.863302] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70194 ] 00:08:13.398 { 00:08:13.398 "subsystems": [ 00:08:13.398 { 00:08:13.398 "subsystem": "bdev", 00:08:13.398 "config": [ 00:08:13.398 { 00:08:13.398 "params": { 00:08:13.398 "trtype": "pcie", 00:08:13.398 "traddr": "0000:00:06.0", 00:08:13.398 "name": "Nvme0" 00:08:13.398 }, 00:08:13.398 "method": "bdev_nvme_attach_controller" 00:08:13.398 }, 00:08:13.398 { 00:08:13.398 "method": "bdev_wait_for_examine" 00:08:13.398 } 00:08:13.398 ] 00:08:13.398 } 00:08:13.398 ] 00:08:13.398 } 00:08:13.656 [2024-12-06 04:11:26.005083] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.656 [2024-12-06 04:11:26.103772] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.916  [2024-12-06T04:11:26.740Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:08:14.175 00:08:14.175 04:11:26 -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:08:14.175 04:11:26 -- dd/basic_rw.sh@65 -- # gen_conf 00:08:14.175 04:11:26 -- dd/common.sh@31 -- # xtrace_disable 00:08:14.175 04:11:26 -- common/autotest_common.sh@10 -- # set +x 00:08:14.175 [2024-12-06 04:11:26.536903] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
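A rough sketch of the offset round trip running here, assuming dd.dump0 already holds the 4096 generated bytes assigned to data above; DD and CONF are shorthand for the spdk_dd binary path and the bdev JSON printed in the log, and the redirection feeding data_check is an assumption based on the visible read/compare steps:

DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
CONF='{"subsystems":[{"subsystem":"bdev","config":[{"params":{"trtype":"pcie","traddr":"0000:00:06.0","name":"Nvme0"},"method":"bdev_nvme_attach_controller"},{"method":"bdev_wait_for_examine"}]}]}'
# write the 4 KiB payload at block offset 1 on the bdev ...
"$DD" --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json <(printf %s "$CONF")
# ... and read one block back from the same offset into dump1
"$DD" --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json <(printf %s "$CONF")
# the first 4096 bytes read back must equal the original generated string
read -rn4096 data_check < /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
[[ $data_check == "$data" ]]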
00:08:14.175 [2024-12-06 04:11:26.537020] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70212 ] 00:08:14.175 { 00:08:14.175 "subsystems": [ 00:08:14.175 { 00:08:14.176 "subsystem": "bdev", 00:08:14.176 "config": [ 00:08:14.176 { 00:08:14.176 "params": { 00:08:14.176 "trtype": "pcie", 00:08:14.176 "traddr": "0000:00:06.0", 00:08:14.176 "name": "Nvme0" 00:08:14.176 }, 00:08:14.176 "method": "bdev_nvme_attach_controller" 00:08:14.176 }, 00:08:14.176 { 00:08:14.176 "method": "bdev_wait_for_examine" 00:08:14.176 } 00:08:14.176 ] 00:08:14.176 } 00:08:14.176 ] 00:08:14.176 } 00:08:14.176 [2024-12-06 04:11:26.677520] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.434 [2024-12-06 04:11:26.771267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.434  [2024-12-06T04:11:27.259Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:08:14.694 00:08:14.694 04:11:27 -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:08:14.694 ************************************ 00:08:14.694 END TEST dd_rw_offset 00:08:14.694 ************************************ 00:08:14.695 04:11:27 -- dd/basic_rw.sh@72 -- # [[ 64w03yoy5kg2g0f3nfoqm6d8knm3cmou95g4994qskog606pqh03kvvr83e7m4pv2165wwmc62jm4234prthv9lvjblqx1iczjnj79fz8a9j3wxo4l0xritpqnm8chcryfa7i01px7zs55i93rlufxkttsjfwqzn15lbwvtbnezoetvfer8h2jtc2xoyq6tv0nmj1xkyeqy46cwxeel9rlfrxm1ika8aipo0rj5wcz4zrgz1cnod34e3u4ehb6xkpn7nwx7h2zjmetwpv7h5d00irp8rfneazof5n74hp3phlhy21v7f4reoc9egm52uu3ang63lqcmeworg6uq16f6xrv49f15lfsfzhs8ui3g2to02k5lof45z3atwrjht0dk2qs0ayioq6eem08btgbufguracw5ufsg8u2zkr9xszw44jt9upl04vzzowvgcekq0bs4yfuebgdb48xv9fayv75fv5sfnj57n2iui923evwol84zid3w2iqc4zkijlusigdrpdaa9bxnqjylb1lkflq4z1imruu2uzwz7pvh0njv4msar2q80tzlambk8jm7h9xyad0oaq742qd5h6660mtgzalzrktjvrlclgathzkb87brk3iqzua389f92t0bc4xzvebv2wnfoo9dx5wk3pkdfaakq0xn50ohur99huitmgcj7ihovfjsrm820zs350yphlj9xps3ws1hp3egg5ojmr05ppfdijm4khhltd1i04sr6652kvnpb0en6wltr3px1co36pehkuz2ktqkezebe5bktqo6ti1x4yr9efooe4mxie3pr67m9ecf8fsum73za47swlylqxkgyrbc4e3a2oi3qjuaig3y7n0fvajg1y23i5ozj64fqqxx7u179744oxs52j9fbo47yqqbmdy1xlandv49m1355pjkxciikrhs22liu31afyt8rartnn6jbi5q8b0tl8purjln6npeol9y3jaront64xqhhuzqeqxg1vv2qkkevvs4hnesuezf8ss1hodugtcp04gb2l7jroudbtrphb86bzzoikizzj6i8bu29xpir6zp1nmey29wbtpaxuoglk0mdzg2h674aj02n5l3vkcbna5kvc9pze8qeo8likuwt7nptc6uirgmvek1zyxxbxrfo8oty5n1st372h00m13ne71r258oo2teb72xo55rkyvczxnfonhoi2n8zyul68ep58ks7zwreml4cfoxb4ydcnydwcw059jaswcx6sxvwrmnvkvr7b9p7oee6nv0gv3nhvm9ie8gscek2del9tq6tli8ali0iy0l9rwa2sqirejjoeas9l2846enma57ohxxe4mbkeyhzkygy4288dhpk6ft3v3g7o7jzdeivvnbsp33ykwuhx3r42aa1jqgdw29hh3ilhlhgcjvgyfworcox2skilmepi5p1n9o3iyhstlde664eilbvj8wy8ludphgwhe2fsvd3vbgt1z8cc3nbpc8rcfhfbkzas6e1t5hyrkxxbreb5i2y7aj6c50j8erykg1gp7ss93hu6fvuiqlak3vct1oxktuto75y8kq4fsxjsi0o9awddwwdqanoqa4kg3c6qudbg2vwkrg2aov4wx2dt1r0m38ipc6e489lqvscn2cfurnt2crgb18tfxq72apro4t2hs3mxpfsg7cd6qihh4je7hcfn67j8tcgr7ulrzp5gefls9cw0dma71oq6cwo87oge12kwut2poeebjpwaw3ip8z6bp85yf9ygj7a9xyh650zskmmv9b288isi5e8rvlhjssgxbo9a4mkeoets7zebfw99m81k9kkidk9w48ubtw4zvdg249nm3pp7zxpmzraya64y7q713cwmkwhqcvyt6zzuhtbu6ysoxh2bjxximbd4mwxa6n77tvz6g78cbco8lqe5jz5oq3go24gfpx8frnppakxhc9mn9ub8ih5wfd0cvxtw9bf906564ijqe1d7wuzgos5uzg62npyje3u9ddy90g7t7j6w56d5w41pijfophyzxbz9udw1cpp9vareniu7pshqncpikz6bu9fmcr2m0tgahu4cudnskd8nn4lcj4qfayv4d7oxfun98lya7dcjtuja59identfo6tv5e90qjd2i5w0nyqz32trqci0e3ngxrb2jponir10c9a8fzy969ak50mi8kr0117
xwuho1w5v2dvyi3b9hg1epe5wh666u2npnrzniejjgz1n33no4s4y1inn3kjj1udcrsub4by9zsbye8w5b9uglctpfl6xcr6qr16600ejitw29ss4njdnzx3yg3yh3cm6hfm6w9pxfshf0vc1nh99xszjscb4o4h7bd2cfae75hix1woa05d7ritak3hwm0kp5gvsw3zzx1hd5ma9agadcp8th1k8kjxpiop85iy77tkd64a1s89nh5lfortf0nnpirrn4svwgshac5z8itwgmn3r22fp38d66qy7lrqhewho5krvhcathgzek3xdy15qhw33e25vj0vl2sc79qh64373c7yqa5m4ql12b9i8jqns78xgiilk82810pv4m5fmsu9udlugryhrkr3yaw0qfib12hmnjt4ssmg524inn9vydk1z8uwk1ayij6i2p8gqed164omxnf1lkoc3xfwgvr8gvtaxq72idjrurvnmaawki0wr1m35adnasjihvty4w6uj5337vgpkyxymupnrnvzfoj2exgqi6wb40v2rxv7bcynsh5h7xsnp3e8kuii8btspj3whe4p6ojbjmlbu8mun86o4o9gl7p07ag9my2d4xq8tqj6olx26a3r16385g6j7dorcrmpoka3brn42nn4uqgeecirq5rju8xc45t2c9drfid6fjhj62pmbw3t2nqif6zopk5awrppv37diqtj1n3w4hnfs8tlb7e264nuj0isyvy0e7s36f4otrvkgj4yku3c5808w8i4syl2t9x6yqtazzdn84fguie2pa4d09mur53pgqo2tbgagmpjz85yz1cky4m371odzw7n1txfdnjxfcbx0osa6n2d9djsomru7ku1ui7hfpoc1xpk4hmbdpoywhvn4whxe940sygwxiyr3txlxwlr31tceszekjn4u2kmosvm2kv0wv23bu5rnqxia8aqkc62csrqxtqej9hbuhe3qv3odxyqtxzs8yiekm32iwxuwswacmww10ik18hs67065kn185j4qdl131jndslpbhx2e85pnqazb6vzbj1pf1g9fd6ezqos4p27pwtku64gaoa8i7wwql8up4i879348110uqhqo6q3mpioahwg3oiyglgzf7cxa9zkd6s9hln7w2xc4hkndx2i02zsukgyk9ti6bkcmo92kat8i7uxxokurkx7gljofbxrx2csslhqqwh7o21hstd02opxjbdsfirqfma3g9c0lqe6n6a4sffg1qphwppcy9onxem2nghc1gadtzo7xxh9z7r13tbtxpprqkq7v12b507gm7ph63qvyzskq9nsfqhvi66foz3g5fzw3z6xsce0r3eiwv1p9ilqis002955qc8yszjolwyv17xv8lp70naysfqcfic56nvmsc1kyur0ezzai97d5t9myjv462fi4i2sz3gjptdchxkxyzfkg8co56fl6xx43i6bo2b3veogejzycvunfi4vx1b6ql3g5xq8ok6g5vx7d43mhqrlykmgbjdspac4hwctx9l9yri68uffbnqupsdrc9ukrg842xz1v8pnecqftfa6wkx2o0ipfoo2o78fo0zjztv07etw61pwlq1onumte271284laupj6d888xgz6ojaik2zysx3mc54hkaklnen3jdya9rtx7ifzrga5y1rde07ewpcxnywpwltgd0w8gys4t7w951rl0ewzswa76s2zwdpxfiwhrxgtqzl4i227h8maaczlawfnyv363lw6xpnjeuipea69ydia5uy6wfnmfb471rtkl7hzxh4a46bwwtc330gic5sdo40ywo3pua64soy78jkeq8l6k3wxbi5 == 
\6\4\w\0\3\y\o\y\5\k\g\2\g\0\f\3\n\f\o\q\m\6\d\8\k\n\m\3\c\m\o\u\9\5\g\4\9\9\4\q\s\k\o\g\6\0\6\p\q\h\0\3\k\v\v\r\8\3\e\7\m\4\p\v\2\1\6\5\w\w\m\c\6\2\j\m\4\2\3\4\p\r\t\h\v\9\l\v\j\b\l\q\x\1\i\c\z\j\n\j\7\9\f\z\8\a\9\j\3\w\x\o\4\l\0\x\r\i\t\p\q\n\m\8\c\h\c\r\y\f\a\7\i\0\1\p\x\7\z\s\5\5\i\9\3\r\l\u\f\x\k\t\t\s\j\f\w\q\z\n\1\5\l\b\w\v\t\b\n\e\z\o\e\t\v\f\e\r\8\h\2\j\t\c\2\x\o\y\q\6\t\v\0\n\m\j\1\x\k\y\e\q\y\4\6\c\w\x\e\e\l\9\r\l\f\r\x\m\1\i\k\a\8\a\i\p\o\0\r\j\5\w\c\z\4\z\r\g\z\1\c\n\o\d\3\4\e\3\u\4\e\h\b\6\x\k\p\n\7\n\w\x\7\h\2\z\j\m\e\t\w\p\v\7\h\5\d\0\0\i\r\p\8\r\f\n\e\a\z\o\f\5\n\7\4\h\p\3\p\h\l\h\y\2\1\v\7\f\4\r\e\o\c\9\e\g\m\5\2\u\u\3\a\n\g\6\3\l\q\c\m\e\w\o\r\g\6\u\q\1\6\f\6\x\r\v\4\9\f\1\5\l\f\s\f\z\h\s\8\u\i\3\g\2\t\o\0\2\k\5\l\o\f\4\5\z\3\a\t\w\r\j\h\t\0\d\k\2\q\s\0\a\y\i\o\q\6\e\e\m\0\8\b\t\g\b\u\f\g\u\r\a\c\w\5\u\f\s\g\8\u\2\z\k\r\9\x\s\z\w\4\4\j\t\9\u\p\l\0\4\v\z\z\o\w\v\g\c\e\k\q\0\b\s\4\y\f\u\e\b\g\d\b\4\8\x\v\9\f\a\y\v\7\5\f\v\5\s\f\n\j\5\7\n\2\i\u\i\9\2\3\e\v\w\o\l\8\4\z\i\d\3\w\2\i\q\c\4\z\k\i\j\l\u\s\i\g\d\r\p\d\a\a\9\b\x\n\q\j\y\l\b\1\l\k\f\l\q\4\z\1\i\m\r\u\u\2\u\z\w\z\7\p\v\h\0\n\j\v\4\m\s\a\r\2\q\8\0\t\z\l\a\m\b\k\8\j\m\7\h\9\x\y\a\d\0\o\a\q\7\4\2\q\d\5\h\6\6\6\0\m\t\g\z\a\l\z\r\k\t\j\v\r\l\c\l\g\a\t\h\z\k\b\8\7\b\r\k\3\i\q\z\u\a\3\8\9\f\9\2\t\0\b\c\4\x\z\v\e\b\v\2\w\n\f\o\o\9\d\x\5\w\k\3\p\k\d\f\a\a\k\q\0\x\n\5\0\o\h\u\r\9\9\h\u\i\t\m\g\c\j\7\i\h\o\v\f\j\s\r\m\8\2\0\z\s\3\5\0\y\p\h\l\j\9\x\p\s\3\w\s\1\h\p\3\e\g\g\5\o\j\m\r\0\5\p\p\f\d\i\j\m\4\k\h\h\l\t\d\1\i\0\4\s\r\6\6\5\2\k\v\n\p\b\0\e\n\6\w\l\t\r\3\p\x\1\c\o\3\6\p\e\h\k\u\z\2\k\t\q\k\e\z\e\b\e\5\b\k\t\q\o\6\t\i\1\x\4\y\r\9\e\f\o\o\e\4\m\x\i\e\3\p\r\6\7\m\9\e\c\f\8\f\s\u\m\7\3\z\a\4\7\s\w\l\y\l\q\x\k\g\y\r\b\c\4\e\3\a\2\o\i\3\q\j\u\a\i\g\3\y\7\n\0\f\v\a\j\g\1\y\2\3\i\5\o\z\j\6\4\f\q\q\x\x\7\u\1\7\9\7\4\4\o\x\s\5\2\j\9\f\b\o\4\7\y\q\q\b\m\d\y\1\x\l\a\n\d\v\4\9\m\1\3\5\5\p\j\k\x\c\i\i\k\r\h\s\2\2\l\i\u\3\1\a\f\y\t\8\r\a\r\t\n\n\6\j\b\i\5\q\8\b\0\t\l\8\p\u\r\j\l\n\6\n\p\e\o\l\9\y\3\j\a\r\o\n\t\6\4\x\q\h\h\u\z\q\e\q\x\g\1\v\v\2\q\k\k\e\v\v\s\4\h\n\e\s\u\e\z\f\8\s\s\1\h\o\d\u\g\t\c\p\0\4\g\b\2\l\7\j\r\o\u\d\b\t\r\p\h\b\8\6\b\z\z\o\i\k\i\z\z\j\6\i\8\b\u\2\9\x\p\i\r\6\z\p\1\n\m\e\y\2\9\w\b\t\p\a\x\u\o\g\l\k\0\m\d\z\g\2\h\6\7\4\a\j\0\2\n\5\l\3\v\k\c\b\n\a\5\k\v\c\9\p\z\e\8\q\e\o\8\l\i\k\u\w\t\7\n\p\t\c\6\u\i\r\g\m\v\e\k\1\z\y\x\x\b\x\r\f\o\8\o\t\y\5\n\1\s\t\3\7\2\h\0\0\m\1\3\n\e\7\1\r\2\5\8\o\o\2\t\e\b\7\2\x\o\5\5\r\k\y\v\c\z\x\n\f\o\n\h\o\i\2\n\8\z\y\u\l\6\8\e\p\5\8\k\s\7\z\w\r\e\m\l\4\c\f\o\x\b\4\y\d\c\n\y\d\w\c\w\0\5\9\j\a\s\w\c\x\6\s\x\v\w\r\m\n\v\k\v\r\7\b\9\p\7\o\e\e\6\n\v\0\g\v\3\n\h\v\m\9\i\e\8\g\s\c\e\k\2\d\e\l\9\t\q\6\t\l\i\8\a\l\i\0\i\y\0\l\9\r\w\a\2\s\q\i\r\e\j\j\o\e\a\s\9\l\2\8\4\6\e\n\m\a\5\7\o\h\x\x\e\4\m\b\k\e\y\h\z\k\y\g\y\4\2\8\8\d\h\p\k\6\f\t\3\v\3\g\7\o\7\j\z\d\e\i\v\v\n\b\s\p\3\3\y\k\w\u\h\x\3\r\4\2\a\a\1\j\q\g\d\w\2\9\h\h\3\i\l\h\l\h\g\c\j\v\g\y\f\w\o\r\c\o\x\2\s\k\i\l\m\e\p\i\5\p\1\n\9\o\3\i\y\h\s\t\l\d\e\6\6\4\e\i\l\b\v\j\8\w\y\8\l\u\d\p\h\g\w\h\e\2\f\s\v\d\3\v\b\g\t\1\z\8\c\c\3\n\b\p\c\8\r\c\f\h\f\b\k\z\a\s\6\e\1\t\5\h\y\r\k\x\x\b\r\e\b\5\i\2\y\7\a\j\6\c\5\0\j\8\e\r\y\k\g\1\g\p\7\s\s\9\3\h\u\6\f\v\u\i\q\l\a\k\3\v\c\t\1\o\x\k\t\u\t\o\7\5\y\8\k\q\4\f\s\x\j\s\i\0\o\9\a\w\d\d\w\w\d\q\a\n\o\q\a\4\k\g\3\c\6\q\u\d\b\g\2\v\w\k\r\g\2\a\o\v\4\w\x\2\d\t\1\r\0\m\3\8\i\p\c\6\e\4\8\9\l\q\v\s\c\n\2\c\f\u\r\n\t\2\c\r\g\b\1\8\t\f\x\q\7\2\a\p\r\o\4\t\2\h\s\3\m\x\p\f\s\g\7\c\d\6\q\i\h\h\4\j\e\7\h\c\f\n\6\7\j\8\t\c\g\r\7\u\l\r\z\p\5\g\e\f\l\s\9\c\w\0\d\m\a\7\1\o\q\6\c\w\o\8\7\o\g\e\1\2\k\w\u\t\2\p\o\e\e\b\j\p\w\a\w\3\i\p\
8\z\6\b\p\8\5\y\f\9\y\g\j\7\a\9\x\y\h\6\5\0\z\s\k\m\m\v\9\b\2\8\8\i\s\i\5\e\8\r\v\l\h\j\s\s\g\x\b\o\9\a\4\m\k\e\o\e\t\s\7\z\e\b\f\w\9\9\m\8\1\k\9\k\k\i\d\k\9\w\4\8\u\b\t\w\4\z\v\d\g\2\4\9\n\m\3\p\p\7\z\x\p\m\z\r\a\y\a\6\4\y\7\q\7\1\3\c\w\m\k\w\h\q\c\v\y\t\6\z\z\u\h\t\b\u\6\y\s\o\x\h\2\b\j\x\x\i\m\b\d\4\m\w\x\a\6\n\7\7\t\v\z\6\g\7\8\c\b\c\o\8\l\q\e\5\j\z\5\o\q\3\g\o\2\4\g\f\p\x\8\f\r\n\p\p\a\k\x\h\c\9\m\n\9\u\b\8\i\h\5\w\f\d\0\c\v\x\t\w\9\b\f\9\0\6\5\6\4\i\j\q\e\1\d\7\w\u\z\g\o\s\5\u\z\g\6\2\n\p\y\j\e\3\u\9\d\d\y\9\0\g\7\t\7\j\6\w\5\6\d\5\w\4\1\p\i\j\f\o\p\h\y\z\x\b\z\9\u\d\w\1\c\p\p\9\v\a\r\e\n\i\u\7\p\s\h\q\n\c\p\i\k\z\6\b\u\9\f\m\c\r\2\m\0\t\g\a\h\u\4\c\u\d\n\s\k\d\8\n\n\4\l\c\j\4\q\f\a\y\v\4\d\7\o\x\f\u\n\9\8\l\y\a\7\d\c\j\t\u\j\a\5\9\i\d\e\n\t\f\o\6\t\v\5\e\9\0\q\j\d\2\i\5\w\0\n\y\q\z\3\2\t\r\q\c\i\0\e\3\n\g\x\r\b\2\j\p\o\n\i\r\1\0\c\9\a\8\f\z\y\9\6\9\a\k\5\0\m\i\8\k\r\0\1\1\7\x\w\u\h\o\1\w\5\v\2\d\v\y\i\3\b\9\h\g\1\e\p\e\5\w\h\6\6\6\u\2\n\p\n\r\z\n\i\e\j\j\g\z\1\n\3\3\n\o\4\s\4\y\1\i\n\n\3\k\j\j\1\u\d\c\r\s\u\b\4\b\y\9\z\s\b\y\e\8\w\5\b\9\u\g\l\c\t\p\f\l\6\x\c\r\6\q\r\1\6\6\0\0\e\j\i\t\w\2\9\s\s\4\n\j\d\n\z\x\3\y\g\3\y\h\3\c\m\6\h\f\m\6\w\9\p\x\f\s\h\f\0\v\c\1\n\h\9\9\x\s\z\j\s\c\b\4\o\4\h\7\b\d\2\c\f\a\e\7\5\h\i\x\1\w\o\a\0\5\d\7\r\i\t\a\k\3\h\w\m\0\k\p\5\g\v\s\w\3\z\z\x\1\h\d\5\m\a\9\a\g\a\d\c\p\8\t\h\1\k\8\k\j\x\p\i\o\p\8\5\i\y\7\7\t\k\d\6\4\a\1\s\8\9\n\h\5\l\f\o\r\t\f\0\n\n\p\i\r\r\n\4\s\v\w\g\s\h\a\c\5\z\8\i\t\w\g\m\n\3\r\2\2\f\p\3\8\d\6\6\q\y\7\l\r\q\h\e\w\h\o\5\k\r\v\h\c\a\t\h\g\z\e\k\3\x\d\y\1\5\q\h\w\3\3\e\2\5\v\j\0\v\l\2\s\c\7\9\q\h\6\4\3\7\3\c\7\y\q\a\5\m\4\q\l\1\2\b\9\i\8\j\q\n\s\7\8\x\g\i\i\l\k\8\2\8\1\0\p\v\4\m\5\f\m\s\u\9\u\d\l\u\g\r\y\h\r\k\r\3\y\a\w\0\q\f\i\b\1\2\h\m\n\j\t\4\s\s\m\g\5\2\4\i\n\n\9\v\y\d\k\1\z\8\u\w\k\1\a\y\i\j\6\i\2\p\8\g\q\e\d\1\6\4\o\m\x\n\f\1\l\k\o\c\3\x\f\w\g\v\r\8\g\v\t\a\x\q\7\2\i\d\j\r\u\r\v\n\m\a\a\w\k\i\0\w\r\1\m\3\5\a\d\n\a\s\j\i\h\v\t\y\4\w\6\u\j\5\3\3\7\v\g\p\k\y\x\y\m\u\p\n\r\n\v\z\f\o\j\2\e\x\g\q\i\6\w\b\4\0\v\2\r\x\v\7\b\c\y\n\s\h\5\h\7\x\s\n\p\3\e\8\k\u\i\i\8\b\t\s\p\j\3\w\h\e\4\p\6\o\j\b\j\m\l\b\u\8\m\u\n\8\6\o\4\o\9\g\l\7\p\0\7\a\g\9\m\y\2\d\4\x\q\8\t\q\j\6\o\l\x\2\6\a\3\r\1\6\3\8\5\g\6\j\7\d\o\r\c\r\m\p\o\k\a\3\b\r\n\4\2\n\n\4\u\q\g\e\e\c\i\r\q\5\r\j\u\8\x\c\4\5\t\2\c\9\d\r\f\i\d\6\f\j\h\j\6\2\p\m\b\w\3\t\2\n\q\i\f\6\z\o\p\k\5\a\w\r\p\p\v\3\7\d\i\q\t\j\1\n\3\w\4\h\n\f\s\8\t\l\b\7\e\2\6\4\n\u\j\0\i\s\y\v\y\0\e\7\s\3\6\f\4\o\t\r\v\k\g\j\4\y\k\u\3\c\5\8\0\8\w\8\i\4\s\y\l\2\t\9\x\6\y\q\t\a\z\z\d\n\8\4\f\g\u\i\e\2\p\a\4\d\0\9\m\u\r\5\3\p\g\q\o\2\t\b\g\a\g\m\p\j\z\8\5\y\z\1\c\k\y\4\m\3\7\1\o\d\z\w\7\n\1\t\x\f\d\n\j\x\f\c\b\x\0\o\s\a\6\n\2\d\9\d\j\s\o\m\r\u\7\k\u\1\u\i\7\h\f\p\o\c\1\x\p\k\4\h\m\b\d\p\o\y\w\h\v\n\4\w\h\x\e\9\4\0\s\y\g\w\x\i\y\r\3\t\x\l\x\w\l\r\3\1\t\c\e\s\z\e\k\j\n\4\u\2\k\m\o\s\v\m\2\k\v\0\w\v\2\3\b\u\5\r\n\q\x\i\a\8\a\q\k\c\6\2\c\s\r\q\x\t\q\e\j\9\h\b\u\h\e\3\q\v\3\o\d\x\y\q\t\x\z\s\8\y\i\e\k\m\3\2\i\w\x\u\w\s\w\a\c\m\w\w\1\0\i\k\1\8\h\s\6\7\0\6\5\k\n\1\8\5\j\4\q\d\l\1\3\1\j\n\d\s\l\p\b\h\x\2\e\8\5\p\n\q\a\z\b\6\v\z\b\j\1\p\f\1\g\9\f\d\6\e\z\q\o\s\4\p\2\7\p\w\t\k\u\6\4\g\a\o\a\8\i\7\w\w\q\l\8\u\p\4\i\8\7\9\3\4\8\1\1\0\u\q\h\q\o\6\q\3\m\p\i\o\a\h\w\g\3\o\i\y\g\l\g\z\f\7\c\x\a\9\z\k\d\6\s\9\h\l\n\7\w\2\x\c\4\h\k\n\d\x\2\i\0\2\z\s\u\k\g\y\k\9\t\i\6\b\k\c\m\o\9\2\k\a\t\8\i\7\u\x\x\o\k\u\r\k\x\7\g\l\j\o\f\b\x\r\x\2\c\s\s\l\h\q\q\w\h\7\o\2\1\h\s\t\d\0\2\o\p\x\j\b\d\s\f\i\r\q\f\m\a\3\g\9\c\0\l\q\e\6\n\6\a\4\s\f\f\g\1\q\p\h\w\p\p\c\y\9\o\n\x\e\m\2\n\g\h\c\1\g\a\d\t\z\o\7\x\x\h\9\z\7\r\1\3\t\b\t\x\p\p\r\q\k\q\7\v\1\2\b\5\0\7\g\m\7\p
\h\6\3\q\v\y\z\s\k\q\9\n\s\f\q\h\v\i\6\6\f\o\z\3\g\5\f\z\w\3\z\6\x\s\c\e\0\r\3\e\i\w\v\1\p\9\i\l\q\i\s\0\0\2\9\5\5\q\c\8\y\s\z\j\o\l\w\y\v\1\7\x\v\8\l\p\7\0\n\a\y\s\f\q\c\f\i\c\5\6\n\v\m\s\c\1\k\y\u\r\0\e\z\z\a\i\9\7\d\5\t\9\m\y\j\v\4\6\2\f\i\4\i\2\s\z\3\g\j\p\t\d\c\h\x\k\x\y\z\f\k\g\8\c\o\5\6\f\l\6\x\x\4\3\i\6\b\o\2\b\3\v\e\o\g\e\j\z\y\c\v\u\n\f\i\4\v\x\1\b\6\q\l\3\g\5\x\q\8\o\k\6\g\5\v\x\7\d\4\3\m\h\q\r\l\y\k\m\g\b\j\d\s\p\a\c\4\h\w\c\t\x\9\l\9\y\r\i\6\8\u\f\f\b\n\q\u\p\s\d\r\c\9\u\k\r\g\8\4\2\x\z\1\v\8\p\n\e\c\q\f\t\f\a\6\w\k\x\2\o\0\i\p\f\o\o\2\o\7\8\f\o\0\z\j\z\t\v\0\7\e\t\w\6\1\p\w\l\q\1\o\n\u\m\t\e\2\7\1\2\8\4\l\a\u\p\j\6\d\8\8\8\x\g\z\6\o\j\a\i\k\2\z\y\s\x\3\m\c\5\4\h\k\a\k\l\n\e\n\3\j\d\y\a\9\r\t\x\7\i\f\z\r\g\a\5\y\1\r\d\e\0\7\e\w\p\c\x\n\y\w\p\w\l\t\g\d\0\w\8\g\y\s\4\t\7\w\9\5\1\r\l\0\e\w\z\s\w\a\7\6\s\2\z\w\d\p\x\f\i\w\h\r\x\g\t\q\z\l\4\i\2\2\7\h\8\m\a\a\c\z\l\a\w\f\n\y\v\3\6\3\l\w\6\x\p\n\j\e\u\i\p\e\a\6\9\y\d\i\a\5\u\y\6\w\f\n\m\f\b\4\7\1\r\t\k\l\7\h\z\x\h\4\a\4\6\b\w\w\t\c\3\3\0\g\i\c\5\s\d\o\4\0\y\w\o\3\p\u\a\6\4\s\o\y\7\8\j\k\e\q\8\l\6\k\3\w\x\b\i\5 ]] 00:08:14.695 00:08:14.695 real 0m1.374s 00:08:14.695 user 0m0.921s 00:08:14.695 sys 0m0.332s 00:08:14.695 04:11:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:14.695 04:11:27 -- common/autotest_common.sh@10 -- # set +x 00:08:14.695 04:11:27 -- dd/basic_rw.sh@1 -- # cleanup 00:08:14.695 04:11:27 -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:08:14.695 04:11:27 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:14.695 04:11:27 -- dd/common.sh@11 -- # local nvme_ref= 00:08:14.695 04:11:27 -- dd/common.sh@12 -- # local size=0xffff 00:08:14.695 04:11:27 -- dd/common.sh@14 -- # local bs=1048576 00:08:14.695 04:11:27 -- dd/common.sh@15 -- # local count=1 00:08:14.695 04:11:27 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:14.695 04:11:27 -- dd/common.sh@18 -- # gen_conf 00:08:14.695 04:11:27 -- dd/common.sh@31 -- # xtrace_disable 00:08:14.695 04:11:27 -- common/autotest_common.sh@10 -- # set +x 00:08:14.695 [2024-12-06 04:11:27.233938] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:14.695 [2024-12-06 04:11:27.234050] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70245 ] 00:08:14.695 { 00:08:14.695 "subsystems": [ 00:08:14.695 { 00:08:14.695 "subsystem": "bdev", 00:08:14.695 "config": [ 00:08:14.695 { 00:08:14.695 "params": { 00:08:14.695 "trtype": "pcie", 00:08:14.695 "traddr": "0000:00:06.0", 00:08:14.695 "name": "Nvme0" 00:08:14.695 }, 00:08:14.695 "method": "bdev_nvme_attach_controller" 00:08:14.695 }, 00:08:14.695 { 00:08:14.695 "method": "bdev_wait_for_examine" 00:08:14.695 } 00:08:14.695 ] 00:08:14.695 } 00:08:14.695 ] 00:08:14.695 } 00:08:14.954 [2024-12-06 04:11:27.373349] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.954 [2024-12-06 04:11:27.459904] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.212  [2024-12-06T04:11:28.036Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:15.471 00:08:15.471 04:11:27 -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:15.471 ************************************ 00:08:15.471 END TEST spdk_dd_basic_rw 00:08:15.471 ************************************ 00:08:15.471 00:08:15.471 real 0m18.047s 00:08:15.472 user 0m12.568s 00:08:15.472 sys 0m4.080s 00:08:15.472 04:11:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:15.472 04:11:27 -- common/autotest_common.sh@10 -- # set +x 00:08:15.472 04:11:27 -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:08:15.472 04:11:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:15.472 04:11:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:15.472 04:11:27 -- common/autotest_common.sh@10 -- # set +x 00:08:15.472 ************************************ 00:08:15.472 START TEST spdk_dd_posix 00:08:15.472 ************************************ 00:08:15.472 04:11:27 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:08:15.472 * Looking for test storage... 
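In outline, the loop that basic_rw.sh repeated above for each bs/qd pair is the following sketch; DD and CONF are shorthand for the spdk_dd path and the bdev JSON printed before every run, and the 8192-byte, qd=1 case is shown:

DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
CONF='{"subsystems":[{"subsystem":"bdev","config":[{"params":{"trtype":"pcie","traddr":"0000:00:06.0","name":"Nvme0"},"method":"bdev_nvme_attach_controller"},{"method":"bdev_wait_for_examine"}]}]}'
# write 7 x 8 KiB from the dump file onto the bdev, then read the region back
"$DD" --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json <(printf %s "$CONF")
"$DD" --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json <(printf %s "$CONF")
# the case passes when the two dump files are identical
diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
# between cases clear_nvme wipes the region with a single 1 MiB zero write
"$DD" --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json <(printf %s "$CONF")

The counts are chosen so every block size covers the same span: 7 x 8192 = 57344 bytes for the 8 KiB runs and 3 x 16384 = 49152 bytes for the 16 KiB runs, matching the size values set in the log.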
00:08:15.472 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:15.472 04:11:27 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:15.472 04:11:27 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:15.472 04:11:27 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:15.731 04:11:28 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:15.731 04:11:28 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:15.731 04:11:28 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:15.731 04:11:28 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:15.731 04:11:28 -- scripts/common.sh@335 -- # IFS=.-: 00:08:15.731 04:11:28 -- scripts/common.sh@335 -- # read -ra ver1 00:08:15.731 04:11:28 -- scripts/common.sh@336 -- # IFS=.-: 00:08:15.731 04:11:28 -- scripts/common.sh@336 -- # read -ra ver2 00:08:15.731 04:11:28 -- scripts/common.sh@337 -- # local 'op=<' 00:08:15.731 04:11:28 -- scripts/common.sh@339 -- # ver1_l=2 00:08:15.731 04:11:28 -- scripts/common.sh@340 -- # ver2_l=1 00:08:15.731 04:11:28 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:15.731 04:11:28 -- scripts/common.sh@343 -- # case "$op" in 00:08:15.731 04:11:28 -- scripts/common.sh@344 -- # : 1 00:08:15.731 04:11:28 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:15.731 04:11:28 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:15.731 04:11:28 -- scripts/common.sh@364 -- # decimal 1 00:08:15.731 04:11:28 -- scripts/common.sh@352 -- # local d=1 00:08:15.731 04:11:28 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:15.731 04:11:28 -- scripts/common.sh@354 -- # echo 1 00:08:15.731 04:11:28 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:15.731 04:11:28 -- scripts/common.sh@365 -- # decimal 2 00:08:15.731 04:11:28 -- scripts/common.sh@352 -- # local d=2 00:08:15.731 04:11:28 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:15.731 04:11:28 -- scripts/common.sh@354 -- # echo 2 00:08:15.731 04:11:28 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:15.731 04:11:28 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:15.731 04:11:28 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:15.731 04:11:28 -- scripts/common.sh@367 -- # return 0 00:08:15.731 04:11:28 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:15.731 04:11:28 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:15.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.731 --rc genhtml_branch_coverage=1 00:08:15.731 --rc genhtml_function_coverage=1 00:08:15.731 --rc genhtml_legend=1 00:08:15.731 --rc geninfo_all_blocks=1 00:08:15.731 --rc geninfo_unexecuted_blocks=1 00:08:15.731 00:08:15.731 ' 00:08:15.731 04:11:28 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:15.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.731 --rc genhtml_branch_coverage=1 00:08:15.731 --rc genhtml_function_coverage=1 00:08:15.731 --rc genhtml_legend=1 00:08:15.731 --rc geninfo_all_blocks=1 00:08:15.731 --rc geninfo_unexecuted_blocks=1 00:08:15.731 00:08:15.731 ' 00:08:15.731 04:11:28 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:15.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.731 --rc genhtml_branch_coverage=1 00:08:15.731 --rc genhtml_function_coverage=1 00:08:15.731 --rc genhtml_legend=1 00:08:15.731 --rc geninfo_all_blocks=1 00:08:15.731 --rc geninfo_unexecuted_blocks=1 00:08:15.731 00:08:15.731 ' 00:08:15.731 04:11:28 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:15.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.731 --rc genhtml_branch_coverage=1 00:08:15.731 --rc genhtml_function_coverage=1 00:08:15.731 --rc genhtml_legend=1 00:08:15.731 --rc geninfo_all_blocks=1 00:08:15.731 --rc geninfo_unexecuted_blocks=1 00:08:15.731 00:08:15.731 ' 00:08:15.731 04:11:28 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:15.731 04:11:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:15.731 04:11:28 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:15.731 04:11:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:15.731 04:11:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.731 04:11:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.731 04:11:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.731 04:11:28 -- paths/export.sh@5 -- # export PATH 00:08:15.731 04:11:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.731 04:11:28 -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:08:15.731 04:11:28 -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:08:15.731 04:11:28 -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:08:15.731 04:11:28 -- dd/posix.sh@125 -- # trap cleanup EXIT 00:08:15.731 04:11:28 -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:15.731 04:11:28 -- dd/posix.sh@128 -- # 
test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:15.731 04:11:28 -- dd/posix.sh@130 -- # tests 00:08:15.731 04:11:28 -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:08:15.731 * First test run, liburing in use 00:08:15.731 04:11:28 -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:08:15.731 04:11:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:15.731 04:11:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:15.731 04:11:28 -- common/autotest_common.sh@10 -- # set +x 00:08:15.731 ************************************ 00:08:15.731 START TEST dd_flag_append 00:08:15.731 ************************************ 00:08:15.731 04:11:28 -- common/autotest_common.sh@1114 -- # append 00:08:15.731 04:11:28 -- dd/posix.sh@16 -- # local dump0 00:08:15.731 04:11:28 -- dd/posix.sh@17 -- # local dump1 00:08:15.731 04:11:28 -- dd/posix.sh@19 -- # gen_bytes 32 00:08:15.731 04:11:28 -- dd/common.sh@98 -- # xtrace_disable 00:08:15.731 04:11:28 -- common/autotest_common.sh@10 -- # set +x 00:08:15.731 04:11:28 -- dd/posix.sh@19 -- # dump0=vki8myro8k1lkzeatgwz7rjkvpw4x0yu 00:08:15.731 04:11:28 -- dd/posix.sh@20 -- # gen_bytes 32 00:08:15.731 04:11:28 -- dd/common.sh@98 -- # xtrace_disable 00:08:15.731 04:11:28 -- common/autotest_common.sh@10 -- # set +x 00:08:15.731 04:11:28 -- dd/posix.sh@20 -- # dump1=5raihjwmrt767xenw7qo14kxpdervvo6 00:08:15.731 04:11:28 -- dd/posix.sh@22 -- # printf %s vki8myro8k1lkzeatgwz7rjkvpw4x0yu 00:08:15.731 04:11:28 -- dd/posix.sh@23 -- # printf %s 5raihjwmrt767xenw7qo14kxpdervvo6 00:08:15.731 04:11:28 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:08:15.731 [2024-12-06 04:11:28.165962] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:15.731 [2024-12-06 04:11:28.166092] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70309 ] 00:08:15.990 [2024-12-06 04:11:28.303971] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.990 [2024-12-06 04:11:28.393088] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.990  [2024-12-06T04:11:28.814Z] Copying: 32/32 [B] (average 31 kBps) 00:08:16.249 00:08:16.249 04:11:28 -- dd/posix.sh@27 -- # [[ 5raihjwmrt767xenw7qo14kxpdervvo6vki8myro8k1lkzeatgwz7rjkvpw4x0yu == \5\r\a\i\h\j\w\m\r\t\7\6\7\x\e\n\w\7\q\o\1\4\k\x\p\d\e\r\v\v\o\6\v\k\i\8\m\y\r\o\8\k\1\l\k\z\e\a\t\g\w\z\7\r\j\k\v\p\w\4\x\0\y\u ]] 00:08:16.249 00:08:16.250 real 0m0.595s 00:08:16.250 user 0m0.325s 00:08:16.250 sys 0m0.151s 00:08:16.250 04:11:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:16.250 ************************************ 00:08:16.250 END TEST dd_flag_append 00:08:16.250 ************************************ 00:08:16.250 04:11:28 -- common/autotest_common.sh@10 -- # set +x 00:08:16.250 04:11:28 -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:08:16.250 04:11:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:16.250 04:11:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:16.250 04:11:28 -- common/autotest_common.sh@10 -- # set +x 00:08:16.250 ************************************ 00:08:16.250 START TEST dd_flag_directory 00:08:16.250 ************************************ 00:08:16.250 04:11:28 -- common/autotest_common.sh@1114 -- # directory 00:08:16.250 04:11:28 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:16.250 04:11:28 -- common/autotest_common.sh@650 -- # local es=0 00:08:16.250 04:11:28 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:16.250 04:11:28 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:16.250 04:11:28 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:16.250 04:11:28 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:16.250 04:11:28 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:16.250 04:11:28 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:16.250 04:11:28 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:16.250 04:11:28 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:16.250 04:11:28 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:16.250 04:11:28 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:16.250 [2024-12-06 04:11:28.812341] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
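The append check that just finished reduces to roughly the following; dump0 and dump1 hold the two 32-byte strings generated above, F0/F1 are shorthand for the dump file paths, and writing the strings into the files with a plain redirect is a simplification of what posix.sh actually does:

DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
F0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
F1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
printf %s "$dump0" > "$F0"   # vki8myro8k1lkzeatgwz7rjkvpw4x0yu
printf %s "$dump1" > "$F1"   # 5raihjwmrt767xenw7qo14kxpdervvo6
# --oflag=append opens the output file for appending instead of truncating it
"$DD" --if="$F0" --of="$F1" --oflag=append
# so dump1 must now be its original contents followed by dump0's
[[ $(<"$F1") == "${dump1}${dump0}" ]]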
00:08:16.250 [2024-12-06 04:11:28.812466] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70336 ] 00:08:16.509 [2024-12-06 04:11:28.952318] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.509 [2024-12-06 04:11:29.040324] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.769 [2024-12-06 04:11:29.124646] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:16.769 [2024-12-06 04:11:29.124696] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:16.769 [2024-12-06 04:11:29.124710] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:16.769 [2024-12-06 04:11:29.234045] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:08:16.769 04:11:29 -- common/autotest_common.sh@653 -- # es=236 00:08:16.769 04:11:29 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:16.769 04:11:29 -- common/autotest_common.sh@662 -- # es=108 00:08:16.769 04:11:29 -- common/autotest_common.sh@663 -- # case "$es" in 00:08:16.769 04:11:29 -- common/autotest_common.sh@670 -- # es=1 00:08:16.769 04:11:29 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:16.769 04:11:29 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:16.769 04:11:29 -- common/autotest_common.sh@650 -- # local es=0 00:08:16.769 04:11:29 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:16.769 04:11:29 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:16.769 04:11:29 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:16.769 04:11:29 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:16.769 04:11:29 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:16.769 04:11:29 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:16.769 04:11:29 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:16.769 04:11:29 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:16.769 04:11:29 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:16.769 04:11:29 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:17.028 [2024-12-06 04:11:29.368693] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:17.028 [2024-12-06 04:11:29.368798] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70351 ] 00:08:17.028 [2024-12-06 04:11:29.506825] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.287 [2024-12-06 04:11:29.596953] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.287 [2024-12-06 04:11:29.682036] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:17.287 [2024-12-06 04:11:29.682088] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:17.287 [2024-12-06 04:11:29.682111] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:17.287 [2024-12-06 04:11:29.792724] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:08:17.546 04:11:29 -- common/autotest_common.sh@653 -- # es=236 00:08:17.546 04:11:29 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:17.546 04:11:29 -- common/autotest_common.sh@662 -- # es=108 00:08:17.546 04:11:29 -- common/autotest_common.sh@663 -- # case "$es" in 00:08:17.546 04:11:29 -- common/autotest_common.sh@670 -- # es=1 00:08:17.546 04:11:29 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:17.546 00:08:17.546 real 0m1.118s 00:08:17.546 user 0m0.619s 00:08:17.547 sys 0m0.289s 00:08:17.547 04:11:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:17.547 04:11:29 -- common/autotest_common.sh@10 -- # set +x 00:08:17.547 ************************************ 00:08:17.547 END TEST dd_flag_directory 00:08:17.547 ************************************ 00:08:17.547 04:11:29 -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:08:17.547 04:11:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:17.547 04:11:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:17.547 04:11:29 -- common/autotest_common.sh@10 -- # set +x 00:08:17.547 ************************************ 00:08:17.547 START TEST dd_flag_nofollow 00:08:17.547 ************************************ 00:08:17.547 04:11:29 -- common/autotest_common.sh@1114 -- # nofollow 00:08:17.547 04:11:29 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:17.547 04:11:29 -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:17.547 04:11:29 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:17.547 04:11:29 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:17.547 04:11:29 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:17.547 04:11:29 -- common/autotest_common.sh@650 -- # local es=0 00:08:17.547 04:11:29 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:17.547 04:11:29 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:17.547 04:11:29 -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:17.547 04:11:29 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:17.547 04:11:29 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:17.547 04:11:29 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:17.547 04:11:29 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:17.547 04:11:29 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:17.547 04:11:29 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:17.547 04:11:29 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:17.547 [2024-12-06 04:11:29.977326] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:17.547 [2024-12-06 04:11:29.977438] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70374 ] 00:08:17.547 [2024-12-06 04:11:30.109269] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.804 [2024-12-06 04:11:30.197891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.804 [2024-12-06 04:11:30.283206] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:17.804 [2024-12-06 04:11:30.283281] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:17.804 [2024-12-06 04:11:30.283297] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:18.062 [2024-12-06 04:11:30.392320] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:08:18.062 04:11:30 -- common/autotest_common.sh@653 -- # es=216 00:08:18.062 04:11:30 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:18.062 04:11:30 -- common/autotest_common.sh@662 -- # es=88 00:08:18.062 04:11:30 -- common/autotest_common.sh@663 -- # case "$es" in 00:08:18.062 04:11:30 -- common/autotest_common.sh@670 -- # es=1 00:08:18.062 04:11:30 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:18.062 04:11:30 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:18.062 04:11:30 -- common/autotest_common.sh@650 -- # local es=0 00:08:18.062 04:11:30 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:18.062 04:11:30 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:18.062 04:11:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:18.062 04:11:30 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:18.062 04:11:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:18.062 04:11:30 -- common/autotest_common.sh@644 -- # type -P 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:18.062 04:11:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:18.062 04:11:30 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:18.062 04:11:30 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:18.062 04:11:30 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:18.062 [2024-12-06 04:11:30.521863] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:18.062 [2024-12-06 04:11:30.521957] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70389 ] 00:08:18.321 [2024-12-06 04:11:30.659676] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.321 [2024-12-06 04:11:30.743319] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.321 [2024-12-06 04:11:30.825882] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:18.321 [2024-12-06 04:11:30.826189] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:18.321 [2024-12-06 04:11:30.826211] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:18.579 [2024-12-06 04:11:30.936427] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:08:18.579 04:11:31 -- common/autotest_common.sh@653 -- # es=216 00:08:18.579 04:11:31 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:18.579 04:11:31 -- common/autotest_common.sh@662 -- # es=88 00:08:18.579 04:11:31 -- common/autotest_common.sh@663 -- # case "$es" in 00:08:18.579 04:11:31 -- common/autotest_common.sh@670 -- # es=1 00:08:18.579 04:11:31 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:18.579 04:11:31 -- dd/posix.sh@46 -- # gen_bytes 512 00:08:18.579 04:11:31 -- dd/common.sh@98 -- # xtrace_disable 00:08:18.579 04:11:31 -- common/autotest_common.sh@10 -- # set +x 00:08:18.579 04:11:31 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:18.580 [2024-12-06 04:11:31.078248] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
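Both nofollow invocations above are expected to fail: dd.dump0.link and dd.dump1.link are symlinks created with ln -fs, and O_NOFOLLOW makes open(2) refuse to resolve a trailing symlink, failing with ELOOP, which strerror() reports as the "Too many levels of symbolic links" message seen in the log. A standalone sketch of that check (placeholder names, Linux/glibc assumed, not spdk_dd's code path):

    #define _GNU_SOURCE
    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        /* Mirror the test's `ln -fs dd.dump0 dd.dump0.link` with placeholders */
        unlink("dd.dump0.example.link");
        if (symlink("dd.dump0.example", "dd.dump0.example.link") != 0) {
            perror("symlink");
            return 1;
        }

        /* O_NOFOLLOW: refuse to resolve a trailing symlink; expect ELOOP */
        int fd = open("dd.dump0.example.link", O_RDONLY | O_NOFOLLOW);
        if (fd < 0 && errno == ELOOP) {
            printf("open failed as expected: %s\n", strerror(errno));
            return 0;
        }
        if (fd >= 0)
            close(fd);
        fprintf(stderr, "expected ELOOP from O_NOFOLLOW\n");
        return 1;
    }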
00:08:18.580 [2024-12-06 04:11:31.078363] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70391 ] 00:08:18.838 [2024-12-06 04:11:31.218491] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.838 [2024-12-06 04:11:31.297398] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.838  [2024-12-06T04:11:31.660Z] Copying: 512/512 [B] (average 500 kBps) 00:08:19.095 00:08:19.095 ************************************ 00:08:19.095 END TEST dd_flag_nofollow 00:08:19.095 ************************************ 00:08:19.095 04:11:31 -- dd/posix.sh@49 -- # [[ xftgxtqu5edscag2vnhe6kidhpavwy8g9d6fqx4r47xkm5r2dsu2y3anpyimh97f78gsgo95scc5domb83xzh4pe0up53jhpe2ks97cy36jyp4l4n44fki2gbxbyh0w99bqdp6mywrtzfdoxjd0od66tyw17izpgklsom6jytvje08a7fk2ta8yfrcbrzwobmz0k0q1y4ueeicg6y26uun3oqlxxte7on29gjcdeyctey3xy0ok04ukqolqkcxicc6us1ez920pmlkp1h0tmvlnuv08dvxicj2b8f4fzo14jjb0ehp5bklllbc84tpkwmpgwvnff8xhc1uidhdqs3xpwvtfvrv6zpaxjxtybosxdkz0hz12mz9weo962s70eiqd79sog1lsm8jb8scaklhfpbbr1pcs6v0txlp0pbxux6a3zj2e0qt68prlgq0y4n5qydih49oju5c8qsvfdl181gcokhy9gu86kmvuo58hc6mta4y47x5kylhrd5un1 == \x\f\t\g\x\t\q\u\5\e\d\s\c\a\g\2\v\n\h\e\6\k\i\d\h\p\a\v\w\y\8\g\9\d\6\f\q\x\4\r\4\7\x\k\m\5\r\2\d\s\u\2\y\3\a\n\p\y\i\m\h\9\7\f\7\8\g\s\g\o\9\5\s\c\c\5\d\o\m\b\8\3\x\z\h\4\p\e\0\u\p\5\3\j\h\p\e\2\k\s\9\7\c\y\3\6\j\y\p\4\l\4\n\4\4\f\k\i\2\g\b\x\b\y\h\0\w\9\9\b\q\d\p\6\m\y\w\r\t\z\f\d\o\x\j\d\0\o\d\6\6\t\y\w\1\7\i\z\p\g\k\l\s\o\m\6\j\y\t\v\j\e\0\8\a\7\f\k\2\t\a\8\y\f\r\c\b\r\z\w\o\b\m\z\0\k\0\q\1\y\4\u\e\e\i\c\g\6\y\2\6\u\u\n\3\o\q\l\x\x\t\e\7\o\n\2\9\g\j\c\d\e\y\c\t\e\y\3\x\y\0\o\k\0\4\u\k\q\o\l\q\k\c\x\i\c\c\6\u\s\1\e\z\9\2\0\p\m\l\k\p\1\h\0\t\m\v\l\n\u\v\0\8\d\v\x\i\c\j\2\b\8\f\4\f\z\o\1\4\j\j\b\0\e\h\p\5\b\k\l\l\l\b\c\8\4\t\p\k\w\m\p\g\w\v\n\f\f\8\x\h\c\1\u\i\d\h\d\q\s\3\x\p\w\v\t\f\v\r\v\6\z\p\a\x\j\x\t\y\b\o\s\x\d\k\z\0\h\z\1\2\m\z\9\w\e\o\9\6\2\s\7\0\e\i\q\d\7\9\s\o\g\1\l\s\m\8\j\b\8\s\c\a\k\l\h\f\p\b\b\r\1\p\c\s\6\v\0\t\x\l\p\0\p\b\x\u\x\6\a\3\z\j\2\e\0\q\t\6\8\p\r\l\g\q\0\y\4\n\5\q\y\d\i\h\4\9\o\j\u\5\c\8\q\s\v\f\d\l\1\8\1\g\c\o\k\h\y\9\g\u\8\6\k\m\v\u\o\5\8\h\c\6\m\t\a\4\y\4\7\x\5\k\y\l\h\r\d\5\u\n\1 ]] 00:08:19.095 00:08:19.095 real 0m1.682s 00:08:19.095 user 0m0.905s 00:08:19.095 sys 0m0.445s 00:08:19.095 04:11:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:19.095 04:11:31 -- common/autotest_common.sh@10 -- # set +x 00:08:19.095 04:11:31 -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:08:19.095 04:11:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:19.095 04:11:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:19.095 04:11:31 -- common/autotest_common.sh@10 -- # set +x 00:08:19.354 ************************************ 00:08:19.354 START TEST dd_flag_noatime 00:08:19.354 ************************************ 00:08:19.354 04:11:31 -- common/autotest_common.sh@1114 -- # noatime 00:08:19.354 04:11:31 -- dd/posix.sh@53 -- # local atime_if 00:08:19.354 04:11:31 -- dd/posix.sh@54 -- # local atime_of 00:08:19.354 04:11:31 -- dd/posix.sh@58 -- # gen_bytes 512 00:08:19.354 04:11:31 -- dd/common.sh@98 -- # xtrace_disable 00:08:19.354 04:11:31 -- common/autotest_common.sh@10 -- # set +x 00:08:19.354 04:11:31 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:19.354 04:11:31 -- dd/posix.sh@60 -- # atime_if=1733458291 
00:08:19.354 04:11:31 -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:19.354 04:11:31 -- dd/posix.sh@61 -- # atime_of=1733458291 00:08:19.354 04:11:31 -- dd/posix.sh@66 -- # sleep 1 00:08:20.289 04:11:32 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:20.289 [2024-12-06 04:11:32.732599] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:20.289 [2024-12-06 04:11:32.732999] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70437 ] 00:08:20.604 [2024-12-06 04:11:32.874478] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.604 [2024-12-06 04:11:32.967421] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.604  [2024-12-06T04:11:33.426Z] Copying: 512/512 [B] (average 500 kBps) 00:08:20.861 00:08:20.861 04:11:33 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:20.861 04:11:33 -- dd/posix.sh@69 -- # (( atime_if == 1733458291 )) 00:08:20.861 04:11:33 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:20.861 04:11:33 -- dd/posix.sh@70 -- # (( atime_of == 1733458291 )) 00:08:20.861 04:11:33 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:20.861 [2024-12-06 04:11:33.322955] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
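dd_flag_noatime records the source file's access time with stat --printf=%X, sleeps one second, copies with --iflag=noatime, and checks that the atime did not move; the plain copy that follows is then expected to advance it. O_NOATIME is a Linux-specific flag that suppresses the atime update on reads and is only honoured when the caller owns the file or has CAP_FOWNER. A sketch of the same before/after comparison (placeholder path, illustrative only, not the test script):

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/stat.h>
    #include <unistd.h>

    static long atime_of(const char *path)
    {
        struct stat st;
        return stat(path, &st) == 0 ? (long)st.st_atime : -1L;
    }

    int main(void)
    {
        const char *path = "dd.dump0.example";   /* placeholder source file */
        char buf[512];

        long before = atime_of(path);
        sleep(1);                                /* same idea as the test's sleep 1 */

        int fd = open(path, O_RDONLY | O_NOATIME);
        if (fd < 0) { perror("open with O_NOATIME"); return 1; }
        ssize_t n = read(fd, buf, sizeof(buf));  /* this read must not bump atime */
        (void)n;
        close(fd);

        long after = atime_of(path);
        printf("atime before=%ld after=%ld -> %s\n", before, after,
               before == after ? "unchanged, as dd_flag_noatime expects" : "changed");
        return 0;
    }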
00:08:20.861 [2024-12-06 04:11:33.323223] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70443 ] 00:08:21.119 [2024-12-06 04:11:33.463251] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.119 [2024-12-06 04:11:33.546947] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.119  [2024-12-06T04:11:33.955Z] Copying: 512/512 [B] (average 500 kBps) 00:08:21.390 00:08:21.390 04:11:33 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:21.390 ************************************ 00:08:21.390 END TEST dd_flag_noatime 00:08:21.390 ************************************ 00:08:21.390 04:11:33 -- dd/posix.sh@73 -- # (( atime_if < 1733458293 )) 00:08:21.390 00:08:21.390 real 0m2.191s 00:08:21.390 user 0m0.624s 00:08:21.390 sys 0m0.322s 00:08:21.390 04:11:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:21.390 04:11:33 -- common/autotest_common.sh@10 -- # set +x 00:08:21.390 04:11:33 -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:08:21.390 04:11:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:21.390 04:11:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:21.390 04:11:33 -- common/autotest_common.sh@10 -- # set +x 00:08:21.390 ************************************ 00:08:21.390 START TEST dd_flags_misc 00:08:21.390 ************************************ 00:08:21.390 04:11:33 -- common/autotest_common.sh@1114 -- # io 00:08:21.390 04:11:33 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:08:21.390 04:11:33 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:08:21.390 04:11:33 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:08:21.390 04:11:33 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:21.390 04:11:33 -- dd/posix.sh@86 -- # gen_bytes 512 00:08:21.390 04:11:33 -- dd/common.sh@98 -- # xtrace_disable 00:08:21.390 04:11:33 -- common/autotest_common.sh@10 -- # set +x 00:08:21.390 04:11:33 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:21.390 04:11:33 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:21.664 [2024-12-06 04:11:33.957741] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
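dd_flags_misc walks a small matrix: each input flag in flags_ro (direct, nonblock) is paired with each output flag in flags_rw (direct, nonblock, sync, dsync), and the same 512-byte file is copied once per pairing. The sketch below only maps those names onto the corresponding open(2) flags and checks that each pairing opens cleanly; it deliberately skips the copy itself, since real I/O under O_DIRECT needs buffer, offset, and length alignment, which spdk_dd handles internally (placeholder file names, Linux/_GNU_SOURCE assumed):

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    struct named_flag { const char *name; int flag; };

    int main(void)
    {
        const struct named_flag iflags[] = {
            { "direct",   O_DIRECT   },
            { "nonblock", O_NONBLOCK },
        };
        const struct named_flag oflags[] = {
            { "direct",   O_DIRECT   },
            { "nonblock", O_NONBLOCK },
            { "sync",     O_SYNC     },   /* data + metadata integrity */
            { "dsync",    O_DSYNC    },   /* data integrity only */
        };

        for (size_t i = 0; i < sizeof(iflags) / sizeof(iflags[0]); i++) {
            for (size_t j = 0; j < sizeof(oflags) / sizeof(oflags[0]); j++) {
                int ifd = open("dd.dump0.example", O_RDONLY | iflags[i].flag);
                int ofd = open("dd.dump1.example",
                               O_WRONLY | O_CREAT | oflags[j].flag, 0644);
                printf("iflag=%-8s oflag=%-8s -> %s\n", iflags[i].name,
                       oflags[j].name,
                       (ifd >= 0 && ofd >= 0) ? "opened" : "open failed");
                if (ifd >= 0) close(ifd);
                if (ofd >= 0) close(ofd);
            }
        }
        return 0;
    }

The sync/dsync pair differs only in the integrity guarantee: O_SYNC requires both data and metadata to reach stable storage before a write returns, while O_DSYNC requires only the data (plus whatever metadata is needed to read it back), which is why the test treats them as separate oflag cases.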
00:08:21.664 [2024-12-06 04:11:33.957997] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70475 ] 00:08:21.664 [2024-12-06 04:11:34.091505] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.664 [2024-12-06 04:11:34.168759] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.922  [2024-12-06T04:11:34.487Z] Copying: 512/512 [B] (average 500 kBps) 00:08:21.922 00:08:21.922 04:11:34 -- dd/posix.sh@93 -- # [[ urn2y44yj04wly8oflriyh9z1nr7uc9a8of18mty9zot6r21d2lb36ofzbrft35vc08pq4vp2un2gt2ymrp0e5v4yarrm1yfo5lmf4pw5bds04ceoxsts2a77t8b4o6amp81l9m28hhk8wuk1321rv78envmr54oqmfhv7q5yl4fdiwcmevjyisuiibb5rqdqfb17v4aggg20b0jn0glpg2zfxkngqbv5j4iqwr15wbcoeoghylilcvu7cb3h521vxkd0dskikgjmb16pb6o6ddexm3oesp2xyq1z53outeyv8e3ae5vwtgcztcgc9dj47b99ct0s9cirl23pfkzckq04ypmfk2f2bsao1gcu0wzl8f8lfykujvs1tom78dpaie2egti3v2kc1kczwxnrp6s0cywblsngncqd1moqy1n5yuf0tdli3l7z6eqhrm7e2vtw3a4agrx74lw07f2sk2ldulsfgqc9la1tojpxfbmdn6hpp7lxm5sicxkv1vm == \u\r\n\2\y\4\4\y\j\0\4\w\l\y\8\o\f\l\r\i\y\h\9\z\1\n\r\7\u\c\9\a\8\o\f\1\8\m\t\y\9\z\o\t\6\r\2\1\d\2\l\b\3\6\o\f\z\b\r\f\t\3\5\v\c\0\8\p\q\4\v\p\2\u\n\2\g\t\2\y\m\r\p\0\e\5\v\4\y\a\r\r\m\1\y\f\o\5\l\m\f\4\p\w\5\b\d\s\0\4\c\e\o\x\s\t\s\2\a\7\7\t\8\b\4\o\6\a\m\p\8\1\l\9\m\2\8\h\h\k\8\w\u\k\1\3\2\1\r\v\7\8\e\n\v\m\r\5\4\o\q\m\f\h\v\7\q\5\y\l\4\f\d\i\w\c\m\e\v\j\y\i\s\u\i\i\b\b\5\r\q\d\q\f\b\1\7\v\4\a\g\g\g\2\0\b\0\j\n\0\g\l\p\g\2\z\f\x\k\n\g\q\b\v\5\j\4\i\q\w\r\1\5\w\b\c\o\e\o\g\h\y\l\i\l\c\v\u\7\c\b\3\h\5\2\1\v\x\k\d\0\d\s\k\i\k\g\j\m\b\1\6\p\b\6\o\6\d\d\e\x\m\3\o\e\s\p\2\x\y\q\1\z\5\3\o\u\t\e\y\v\8\e\3\a\e\5\v\w\t\g\c\z\t\c\g\c\9\d\j\4\7\b\9\9\c\t\0\s\9\c\i\r\l\2\3\p\f\k\z\c\k\q\0\4\y\p\m\f\k\2\f\2\b\s\a\o\1\g\c\u\0\w\z\l\8\f\8\l\f\y\k\u\j\v\s\1\t\o\m\7\8\d\p\a\i\e\2\e\g\t\i\3\v\2\k\c\1\k\c\z\w\x\n\r\p\6\s\0\c\y\w\b\l\s\n\g\n\c\q\d\1\m\o\q\y\1\n\5\y\u\f\0\t\d\l\i\3\l\7\z\6\e\q\h\r\m\7\e\2\v\t\w\3\a\4\a\g\r\x\7\4\l\w\0\7\f\2\s\k\2\l\d\u\l\s\f\g\q\c\9\l\a\1\t\o\j\p\x\f\b\m\d\n\6\h\p\p\7\l\x\m\5\s\i\c\x\k\v\1\v\m ]] 00:08:21.922 04:11:34 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:21.922 04:11:34 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:22.181 [2024-12-06 04:11:34.523830] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:22.181 [2024-12-06 04:11:34.524152] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70487 ] 00:08:22.181 [2024-12-06 04:11:34.664215] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.439 [2024-12-06 04:11:34.746745] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.439  [2024-12-06T04:11:35.267Z] Copying: 512/512 [B] (average 500 kBps) 00:08:22.702 00:08:22.702 04:11:35 -- dd/posix.sh@93 -- # [[ urn2y44yj04wly8oflriyh9z1nr7uc9a8of18mty9zot6r21d2lb36ofzbrft35vc08pq4vp2un2gt2ymrp0e5v4yarrm1yfo5lmf4pw5bds04ceoxsts2a77t8b4o6amp81l9m28hhk8wuk1321rv78envmr54oqmfhv7q5yl4fdiwcmevjyisuiibb5rqdqfb17v4aggg20b0jn0glpg2zfxkngqbv5j4iqwr15wbcoeoghylilcvu7cb3h521vxkd0dskikgjmb16pb6o6ddexm3oesp2xyq1z53outeyv8e3ae5vwtgcztcgc9dj47b99ct0s9cirl23pfkzckq04ypmfk2f2bsao1gcu0wzl8f8lfykujvs1tom78dpaie2egti3v2kc1kczwxnrp6s0cywblsngncqd1moqy1n5yuf0tdli3l7z6eqhrm7e2vtw3a4agrx74lw07f2sk2ldulsfgqc9la1tojpxfbmdn6hpp7lxm5sicxkv1vm == \u\r\n\2\y\4\4\y\j\0\4\w\l\y\8\o\f\l\r\i\y\h\9\z\1\n\r\7\u\c\9\a\8\o\f\1\8\m\t\y\9\z\o\t\6\r\2\1\d\2\l\b\3\6\o\f\z\b\r\f\t\3\5\v\c\0\8\p\q\4\v\p\2\u\n\2\g\t\2\y\m\r\p\0\e\5\v\4\y\a\r\r\m\1\y\f\o\5\l\m\f\4\p\w\5\b\d\s\0\4\c\e\o\x\s\t\s\2\a\7\7\t\8\b\4\o\6\a\m\p\8\1\l\9\m\2\8\h\h\k\8\w\u\k\1\3\2\1\r\v\7\8\e\n\v\m\r\5\4\o\q\m\f\h\v\7\q\5\y\l\4\f\d\i\w\c\m\e\v\j\y\i\s\u\i\i\b\b\5\r\q\d\q\f\b\1\7\v\4\a\g\g\g\2\0\b\0\j\n\0\g\l\p\g\2\z\f\x\k\n\g\q\b\v\5\j\4\i\q\w\r\1\5\w\b\c\o\e\o\g\h\y\l\i\l\c\v\u\7\c\b\3\h\5\2\1\v\x\k\d\0\d\s\k\i\k\g\j\m\b\1\6\p\b\6\o\6\d\d\e\x\m\3\o\e\s\p\2\x\y\q\1\z\5\3\o\u\t\e\y\v\8\e\3\a\e\5\v\w\t\g\c\z\t\c\g\c\9\d\j\4\7\b\9\9\c\t\0\s\9\c\i\r\l\2\3\p\f\k\z\c\k\q\0\4\y\p\m\f\k\2\f\2\b\s\a\o\1\g\c\u\0\w\z\l\8\f\8\l\f\y\k\u\j\v\s\1\t\o\m\7\8\d\p\a\i\e\2\e\g\t\i\3\v\2\k\c\1\k\c\z\w\x\n\r\p\6\s\0\c\y\w\b\l\s\n\g\n\c\q\d\1\m\o\q\y\1\n\5\y\u\f\0\t\d\l\i\3\l\7\z\6\e\q\h\r\m\7\e\2\v\t\w\3\a\4\a\g\r\x\7\4\l\w\0\7\f\2\s\k\2\l\d\u\l\s\f\g\q\c\9\l\a\1\t\o\j\p\x\f\b\m\d\n\6\h\p\p\7\l\x\m\5\s\i\c\x\k\v\1\v\m ]] 00:08:22.702 04:11:35 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:22.702 04:11:35 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:22.702 [2024-12-06 04:11:35.093337] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:22.702 [2024-12-06 04:11:35.093485] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70490 ] 00:08:22.702 [2024-12-06 04:11:35.232174] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.960 [2024-12-06 04:11:35.308717] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.960  [2024-12-06T04:11:35.783Z] Copying: 512/512 [B] (average 166 kBps) 00:08:23.218 00:08:23.218 04:11:35 -- dd/posix.sh@93 -- # [[ urn2y44yj04wly8oflriyh9z1nr7uc9a8of18mty9zot6r21d2lb36ofzbrft35vc08pq4vp2un2gt2ymrp0e5v4yarrm1yfo5lmf4pw5bds04ceoxsts2a77t8b4o6amp81l9m28hhk8wuk1321rv78envmr54oqmfhv7q5yl4fdiwcmevjyisuiibb5rqdqfb17v4aggg20b0jn0glpg2zfxkngqbv5j4iqwr15wbcoeoghylilcvu7cb3h521vxkd0dskikgjmb16pb6o6ddexm3oesp2xyq1z53outeyv8e3ae5vwtgcztcgc9dj47b99ct0s9cirl23pfkzckq04ypmfk2f2bsao1gcu0wzl8f8lfykujvs1tom78dpaie2egti3v2kc1kczwxnrp6s0cywblsngncqd1moqy1n5yuf0tdli3l7z6eqhrm7e2vtw3a4agrx74lw07f2sk2ldulsfgqc9la1tojpxfbmdn6hpp7lxm5sicxkv1vm == \u\r\n\2\y\4\4\y\j\0\4\w\l\y\8\o\f\l\r\i\y\h\9\z\1\n\r\7\u\c\9\a\8\o\f\1\8\m\t\y\9\z\o\t\6\r\2\1\d\2\l\b\3\6\o\f\z\b\r\f\t\3\5\v\c\0\8\p\q\4\v\p\2\u\n\2\g\t\2\y\m\r\p\0\e\5\v\4\y\a\r\r\m\1\y\f\o\5\l\m\f\4\p\w\5\b\d\s\0\4\c\e\o\x\s\t\s\2\a\7\7\t\8\b\4\o\6\a\m\p\8\1\l\9\m\2\8\h\h\k\8\w\u\k\1\3\2\1\r\v\7\8\e\n\v\m\r\5\4\o\q\m\f\h\v\7\q\5\y\l\4\f\d\i\w\c\m\e\v\j\y\i\s\u\i\i\b\b\5\r\q\d\q\f\b\1\7\v\4\a\g\g\g\2\0\b\0\j\n\0\g\l\p\g\2\z\f\x\k\n\g\q\b\v\5\j\4\i\q\w\r\1\5\w\b\c\o\e\o\g\h\y\l\i\l\c\v\u\7\c\b\3\h\5\2\1\v\x\k\d\0\d\s\k\i\k\g\j\m\b\1\6\p\b\6\o\6\d\d\e\x\m\3\o\e\s\p\2\x\y\q\1\z\5\3\o\u\t\e\y\v\8\e\3\a\e\5\v\w\t\g\c\z\t\c\g\c\9\d\j\4\7\b\9\9\c\t\0\s\9\c\i\r\l\2\3\p\f\k\z\c\k\q\0\4\y\p\m\f\k\2\f\2\b\s\a\o\1\g\c\u\0\w\z\l\8\f\8\l\f\y\k\u\j\v\s\1\t\o\m\7\8\d\p\a\i\e\2\e\g\t\i\3\v\2\k\c\1\k\c\z\w\x\n\r\p\6\s\0\c\y\w\b\l\s\n\g\n\c\q\d\1\m\o\q\y\1\n\5\y\u\f\0\t\d\l\i\3\l\7\z\6\e\q\h\r\m\7\e\2\v\t\w\3\a\4\a\g\r\x\7\4\l\w\0\7\f\2\s\k\2\l\d\u\l\s\f\g\q\c\9\l\a\1\t\o\j\p\x\f\b\m\d\n\6\h\p\p\7\l\x\m\5\s\i\c\x\k\v\1\v\m ]] 00:08:23.218 04:11:35 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:23.218 04:11:35 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:23.218 [2024-12-06 04:11:35.664490] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:23.218 [2024-12-06 04:11:35.664593] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70503 ] 00:08:23.476 [2024-12-06 04:11:35.803563] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.476 [2024-12-06 04:11:35.883021] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.476  [2024-12-06T04:11:36.300Z] Copying: 512/512 [B] (average 500 kBps) 00:08:23.735 00:08:23.735 04:11:36 -- dd/posix.sh@93 -- # [[ urn2y44yj04wly8oflriyh9z1nr7uc9a8of18mty9zot6r21d2lb36ofzbrft35vc08pq4vp2un2gt2ymrp0e5v4yarrm1yfo5lmf4pw5bds04ceoxsts2a77t8b4o6amp81l9m28hhk8wuk1321rv78envmr54oqmfhv7q5yl4fdiwcmevjyisuiibb5rqdqfb17v4aggg20b0jn0glpg2zfxkngqbv5j4iqwr15wbcoeoghylilcvu7cb3h521vxkd0dskikgjmb16pb6o6ddexm3oesp2xyq1z53outeyv8e3ae5vwtgcztcgc9dj47b99ct0s9cirl23pfkzckq04ypmfk2f2bsao1gcu0wzl8f8lfykujvs1tom78dpaie2egti3v2kc1kczwxnrp6s0cywblsngncqd1moqy1n5yuf0tdli3l7z6eqhrm7e2vtw3a4agrx74lw07f2sk2ldulsfgqc9la1tojpxfbmdn6hpp7lxm5sicxkv1vm == \u\r\n\2\y\4\4\y\j\0\4\w\l\y\8\o\f\l\r\i\y\h\9\z\1\n\r\7\u\c\9\a\8\o\f\1\8\m\t\y\9\z\o\t\6\r\2\1\d\2\l\b\3\6\o\f\z\b\r\f\t\3\5\v\c\0\8\p\q\4\v\p\2\u\n\2\g\t\2\y\m\r\p\0\e\5\v\4\y\a\r\r\m\1\y\f\o\5\l\m\f\4\p\w\5\b\d\s\0\4\c\e\o\x\s\t\s\2\a\7\7\t\8\b\4\o\6\a\m\p\8\1\l\9\m\2\8\h\h\k\8\w\u\k\1\3\2\1\r\v\7\8\e\n\v\m\r\5\4\o\q\m\f\h\v\7\q\5\y\l\4\f\d\i\w\c\m\e\v\j\y\i\s\u\i\i\b\b\5\r\q\d\q\f\b\1\7\v\4\a\g\g\g\2\0\b\0\j\n\0\g\l\p\g\2\z\f\x\k\n\g\q\b\v\5\j\4\i\q\w\r\1\5\w\b\c\o\e\o\g\h\y\l\i\l\c\v\u\7\c\b\3\h\5\2\1\v\x\k\d\0\d\s\k\i\k\g\j\m\b\1\6\p\b\6\o\6\d\d\e\x\m\3\o\e\s\p\2\x\y\q\1\z\5\3\o\u\t\e\y\v\8\e\3\a\e\5\v\w\t\g\c\z\t\c\g\c\9\d\j\4\7\b\9\9\c\t\0\s\9\c\i\r\l\2\3\p\f\k\z\c\k\q\0\4\y\p\m\f\k\2\f\2\b\s\a\o\1\g\c\u\0\w\z\l\8\f\8\l\f\y\k\u\j\v\s\1\t\o\m\7\8\d\p\a\i\e\2\e\g\t\i\3\v\2\k\c\1\k\c\z\w\x\n\r\p\6\s\0\c\y\w\b\l\s\n\g\n\c\q\d\1\m\o\q\y\1\n\5\y\u\f\0\t\d\l\i\3\l\7\z\6\e\q\h\r\m\7\e\2\v\t\w\3\a\4\a\g\r\x\7\4\l\w\0\7\f\2\s\k\2\l\d\u\l\s\f\g\q\c\9\l\a\1\t\o\j\p\x\f\b\m\d\n\6\h\p\p\7\l\x\m\5\s\i\c\x\k\v\1\v\m ]] 00:08:23.735 04:11:36 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:23.735 04:11:36 -- dd/posix.sh@86 -- # gen_bytes 512 00:08:23.735 04:11:36 -- dd/common.sh@98 -- # xtrace_disable 00:08:23.735 04:11:36 -- common/autotest_common.sh@10 -- # set +x 00:08:23.735 04:11:36 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:23.735 04:11:36 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:23.735 [2024-12-06 04:11:36.240517] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:23.735 [2024-12-06 04:11:36.240638] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70505 ] 00:08:23.993 [2024-12-06 04:11:36.380534] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.993 [2024-12-06 04:11:36.464368] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.993  [2024-12-06T04:11:36.816Z] Copying: 512/512 [B] (average 500 kBps) 00:08:24.251 00:08:24.251 04:11:36 -- dd/posix.sh@93 -- # [[ s1el8kuqrn5x5d750ogyo8kbbmj0ik12qo2omltdp9s54mpt00n5w14ywlhmvo4461on0grpa6h6mopg4pta7ipvflma2hhpcm8akk1sx32dvxly13xk6d4pzo7n3yvbg8me0pt459jb78zd58bke6mmz8q3c6mfcsaxeuefwgpa1u91edk4t2mn93a6992xl0fg04d7gdczj8912hd9td2a2cvk7ow6ysco7iy1fx7cr6o38ihkr94ckh7n81tm4nrgr6yogt67vi7xocl0quymddp9v6co5o7ooif6gmmhgc72v5lq616647m5y1z4sh5kvrak72n8fu7zoq9j2xo45tdccdki8d1knq8jgo1c0fiieagv4af9nu58pmh4r2k9xldlwu8krhfr0p99u4y1xh2e21vqxfsyvjwm8deyzcrh2ygpvxmpycbxrtfusehdt98kv2e5bedjgdthivv0n8souadozlls2hmjvb1e2lncm0jh5r56uv5hh8oq == \s\1\e\l\8\k\u\q\r\n\5\x\5\d\7\5\0\o\g\y\o\8\k\b\b\m\j\0\i\k\1\2\q\o\2\o\m\l\t\d\p\9\s\5\4\m\p\t\0\0\n\5\w\1\4\y\w\l\h\m\v\o\4\4\6\1\o\n\0\g\r\p\a\6\h\6\m\o\p\g\4\p\t\a\7\i\p\v\f\l\m\a\2\h\h\p\c\m\8\a\k\k\1\s\x\3\2\d\v\x\l\y\1\3\x\k\6\d\4\p\z\o\7\n\3\y\v\b\g\8\m\e\0\p\t\4\5\9\j\b\7\8\z\d\5\8\b\k\e\6\m\m\z\8\q\3\c\6\m\f\c\s\a\x\e\u\e\f\w\g\p\a\1\u\9\1\e\d\k\4\t\2\m\n\9\3\a\6\9\9\2\x\l\0\f\g\0\4\d\7\g\d\c\z\j\8\9\1\2\h\d\9\t\d\2\a\2\c\v\k\7\o\w\6\y\s\c\o\7\i\y\1\f\x\7\c\r\6\o\3\8\i\h\k\r\9\4\c\k\h\7\n\8\1\t\m\4\n\r\g\r\6\y\o\g\t\6\7\v\i\7\x\o\c\l\0\q\u\y\m\d\d\p\9\v\6\c\o\5\o\7\o\o\i\f\6\g\m\m\h\g\c\7\2\v\5\l\q\6\1\6\6\4\7\m\5\y\1\z\4\s\h\5\k\v\r\a\k\7\2\n\8\f\u\7\z\o\q\9\j\2\x\o\4\5\t\d\c\c\d\k\i\8\d\1\k\n\q\8\j\g\o\1\c\0\f\i\i\e\a\g\v\4\a\f\9\n\u\5\8\p\m\h\4\r\2\k\9\x\l\d\l\w\u\8\k\r\h\f\r\0\p\9\9\u\4\y\1\x\h\2\e\2\1\v\q\x\f\s\y\v\j\w\m\8\d\e\y\z\c\r\h\2\y\g\p\v\x\m\p\y\c\b\x\r\t\f\u\s\e\h\d\t\9\8\k\v\2\e\5\b\e\d\j\g\d\t\h\i\v\v\0\n\8\s\o\u\a\d\o\z\l\l\s\2\h\m\j\v\b\1\e\2\l\n\c\m\0\j\h\5\r\5\6\u\v\5\h\h\8\o\q ]] 00:08:24.251 04:11:36 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:24.251 04:11:36 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:24.251 [2024-12-06 04:11:36.810856] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:24.251 [2024-12-06 04:11:36.810962] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70518 ] 00:08:24.510 [2024-12-06 04:11:36.950842] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.510 [2024-12-06 04:11:37.029469] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.769  [2024-12-06T04:11:37.334Z] Copying: 512/512 [B] (average 500 kBps) 00:08:24.769 00:08:25.029 04:11:37 -- dd/posix.sh@93 -- # [[ s1el8kuqrn5x5d750ogyo8kbbmj0ik12qo2omltdp9s54mpt00n5w14ywlhmvo4461on0grpa6h6mopg4pta7ipvflma2hhpcm8akk1sx32dvxly13xk6d4pzo7n3yvbg8me0pt459jb78zd58bke6mmz8q3c6mfcsaxeuefwgpa1u91edk4t2mn93a6992xl0fg04d7gdczj8912hd9td2a2cvk7ow6ysco7iy1fx7cr6o38ihkr94ckh7n81tm4nrgr6yogt67vi7xocl0quymddp9v6co5o7ooif6gmmhgc72v5lq616647m5y1z4sh5kvrak72n8fu7zoq9j2xo45tdccdki8d1knq8jgo1c0fiieagv4af9nu58pmh4r2k9xldlwu8krhfr0p99u4y1xh2e21vqxfsyvjwm8deyzcrh2ygpvxmpycbxrtfusehdt98kv2e5bedjgdthivv0n8souadozlls2hmjvb1e2lncm0jh5r56uv5hh8oq == \s\1\e\l\8\k\u\q\r\n\5\x\5\d\7\5\0\o\g\y\o\8\k\b\b\m\j\0\i\k\1\2\q\o\2\o\m\l\t\d\p\9\s\5\4\m\p\t\0\0\n\5\w\1\4\y\w\l\h\m\v\o\4\4\6\1\o\n\0\g\r\p\a\6\h\6\m\o\p\g\4\p\t\a\7\i\p\v\f\l\m\a\2\h\h\p\c\m\8\a\k\k\1\s\x\3\2\d\v\x\l\y\1\3\x\k\6\d\4\p\z\o\7\n\3\y\v\b\g\8\m\e\0\p\t\4\5\9\j\b\7\8\z\d\5\8\b\k\e\6\m\m\z\8\q\3\c\6\m\f\c\s\a\x\e\u\e\f\w\g\p\a\1\u\9\1\e\d\k\4\t\2\m\n\9\3\a\6\9\9\2\x\l\0\f\g\0\4\d\7\g\d\c\z\j\8\9\1\2\h\d\9\t\d\2\a\2\c\v\k\7\o\w\6\y\s\c\o\7\i\y\1\f\x\7\c\r\6\o\3\8\i\h\k\r\9\4\c\k\h\7\n\8\1\t\m\4\n\r\g\r\6\y\o\g\t\6\7\v\i\7\x\o\c\l\0\q\u\y\m\d\d\p\9\v\6\c\o\5\o\7\o\o\i\f\6\g\m\m\h\g\c\7\2\v\5\l\q\6\1\6\6\4\7\m\5\y\1\z\4\s\h\5\k\v\r\a\k\7\2\n\8\f\u\7\z\o\q\9\j\2\x\o\4\5\t\d\c\c\d\k\i\8\d\1\k\n\q\8\j\g\o\1\c\0\f\i\i\e\a\g\v\4\a\f\9\n\u\5\8\p\m\h\4\r\2\k\9\x\l\d\l\w\u\8\k\r\h\f\r\0\p\9\9\u\4\y\1\x\h\2\e\2\1\v\q\x\f\s\y\v\j\w\m\8\d\e\y\z\c\r\h\2\y\g\p\v\x\m\p\y\c\b\x\r\t\f\u\s\e\h\d\t\9\8\k\v\2\e\5\b\e\d\j\g\d\t\h\i\v\v\0\n\8\s\o\u\a\d\o\z\l\l\s\2\h\m\j\v\b\1\e\2\l\n\c\m\0\j\h\5\r\5\6\u\v\5\h\h\8\o\q ]] 00:08:25.029 04:11:37 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:25.029 04:11:37 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:25.029 [2024-12-06 04:11:37.383435] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:25.029 [2024-12-06 04:11:37.383542] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70520 ] 00:08:25.029 [2024-12-06 04:11:37.523462] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.289 [2024-12-06 04:11:37.610753] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.289  [2024-12-06T04:11:38.112Z] Copying: 512/512 [B] (average 250 kBps) 00:08:25.547 00:08:25.547 04:11:37 -- dd/posix.sh@93 -- # [[ s1el8kuqrn5x5d750ogyo8kbbmj0ik12qo2omltdp9s54mpt00n5w14ywlhmvo4461on0grpa6h6mopg4pta7ipvflma2hhpcm8akk1sx32dvxly13xk6d4pzo7n3yvbg8me0pt459jb78zd58bke6mmz8q3c6mfcsaxeuefwgpa1u91edk4t2mn93a6992xl0fg04d7gdczj8912hd9td2a2cvk7ow6ysco7iy1fx7cr6o38ihkr94ckh7n81tm4nrgr6yogt67vi7xocl0quymddp9v6co5o7ooif6gmmhgc72v5lq616647m5y1z4sh5kvrak72n8fu7zoq9j2xo45tdccdki8d1knq8jgo1c0fiieagv4af9nu58pmh4r2k9xldlwu8krhfr0p99u4y1xh2e21vqxfsyvjwm8deyzcrh2ygpvxmpycbxrtfusehdt98kv2e5bedjgdthivv0n8souadozlls2hmjvb1e2lncm0jh5r56uv5hh8oq == \s\1\e\l\8\k\u\q\r\n\5\x\5\d\7\5\0\o\g\y\o\8\k\b\b\m\j\0\i\k\1\2\q\o\2\o\m\l\t\d\p\9\s\5\4\m\p\t\0\0\n\5\w\1\4\y\w\l\h\m\v\o\4\4\6\1\o\n\0\g\r\p\a\6\h\6\m\o\p\g\4\p\t\a\7\i\p\v\f\l\m\a\2\h\h\p\c\m\8\a\k\k\1\s\x\3\2\d\v\x\l\y\1\3\x\k\6\d\4\p\z\o\7\n\3\y\v\b\g\8\m\e\0\p\t\4\5\9\j\b\7\8\z\d\5\8\b\k\e\6\m\m\z\8\q\3\c\6\m\f\c\s\a\x\e\u\e\f\w\g\p\a\1\u\9\1\e\d\k\4\t\2\m\n\9\3\a\6\9\9\2\x\l\0\f\g\0\4\d\7\g\d\c\z\j\8\9\1\2\h\d\9\t\d\2\a\2\c\v\k\7\o\w\6\y\s\c\o\7\i\y\1\f\x\7\c\r\6\o\3\8\i\h\k\r\9\4\c\k\h\7\n\8\1\t\m\4\n\r\g\r\6\y\o\g\t\6\7\v\i\7\x\o\c\l\0\q\u\y\m\d\d\p\9\v\6\c\o\5\o\7\o\o\i\f\6\g\m\m\h\g\c\7\2\v\5\l\q\6\1\6\6\4\7\m\5\y\1\z\4\s\h\5\k\v\r\a\k\7\2\n\8\f\u\7\z\o\q\9\j\2\x\o\4\5\t\d\c\c\d\k\i\8\d\1\k\n\q\8\j\g\o\1\c\0\f\i\i\e\a\g\v\4\a\f\9\n\u\5\8\p\m\h\4\r\2\k\9\x\l\d\l\w\u\8\k\r\h\f\r\0\p\9\9\u\4\y\1\x\h\2\e\2\1\v\q\x\f\s\y\v\j\w\m\8\d\e\y\z\c\r\h\2\y\g\p\v\x\m\p\y\c\b\x\r\t\f\u\s\e\h\d\t\9\8\k\v\2\e\5\b\e\d\j\g\d\t\h\i\v\v\0\n\8\s\o\u\a\d\o\z\l\l\s\2\h\m\j\v\b\1\e\2\l\n\c\m\0\j\h\5\r\5\6\u\v\5\h\h\8\o\q ]] 00:08:25.547 04:11:37 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:25.547 04:11:37 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:25.547 [2024-12-06 04:11:37.955318] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:25.547 [2024-12-06 04:11:37.955482] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70533 ] 00:08:25.547 [2024-12-06 04:11:38.094859] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.806 [2024-12-06 04:11:38.177558] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.806  [2024-12-06T04:11:38.630Z] Copying: 512/512 [B] (average 250 kBps) 00:08:26.065 00:08:26.065 ************************************ 00:08:26.065 END TEST dd_flags_misc 00:08:26.065 ************************************ 00:08:26.065 04:11:38 -- dd/posix.sh@93 -- # [[ s1el8kuqrn5x5d750ogyo8kbbmj0ik12qo2omltdp9s54mpt00n5w14ywlhmvo4461on0grpa6h6mopg4pta7ipvflma2hhpcm8akk1sx32dvxly13xk6d4pzo7n3yvbg8me0pt459jb78zd58bke6mmz8q3c6mfcsaxeuefwgpa1u91edk4t2mn93a6992xl0fg04d7gdczj8912hd9td2a2cvk7ow6ysco7iy1fx7cr6o38ihkr94ckh7n81tm4nrgr6yogt67vi7xocl0quymddp9v6co5o7ooif6gmmhgc72v5lq616647m5y1z4sh5kvrak72n8fu7zoq9j2xo45tdccdki8d1knq8jgo1c0fiieagv4af9nu58pmh4r2k9xldlwu8krhfr0p99u4y1xh2e21vqxfsyvjwm8deyzcrh2ygpvxmpycbxrtfusehdt98kv2e5bedjgdthivv0n8souadozlls2hmjvb1e2lncm0jh5r56uv5hh8oq == \s\1\e\l\8\k\u\q\r\n\5\x\5\d\7\5\0\o\g\y\o\8\k\b\b\m\j\0\i\k\1\2\q\o\2\o\m\l\t\d\p\9\s\5\4\m\p\t\0\0\n\5\w\1\4\y\w\l\h\m\v\o\4\4\6\1\o\n\0\g\r\p\a\6\h\6\m\o\p\g\4\p\t\a\7\i\p\v\f\l\m\a\2\h\h\p\c\m\8\a\k\k\1\s\x\3\2\d\v\x\l\y\1\3\x\k\6\d\4\p\z\o\7\n\3\y\v\b\g\8\m\e\0\p\t\4\5\9\j\b\7\8\z\d\5\8\b\k\e\6\m\m\z\8\q\3\c\6\m\f\c\s\a\x\e\u\e\f\w\g\p\a\1\u\9\1\e\d\k\4\t\2\m\n\9\3\a\6\9\9\2\x\l\0\f\g\0\4\d\7\g\d\c\z\j\8\9\1\2\h\d\9\t\d\2\a\2\c\v\k\7\o\w\6\y\s\c\o\7\i\y\1\f\x\7\c\r\6\o\3\8\i\h\k\r\9\4\c\k\h\7\n\8\1\t\m\4\n\r\g\r\6\y\o\g\t\6\7\v\i\7\x\o\c\l\0\q\u\y\m\d\d\p\9\v\6\c\o\5\o\7\o\o\i\f\6\g\m\m\h\g\c\7\2\v\5\l\q\6\1\6\6\4\7\m\5\y\1\z\4\s\h\5\k\v\r\a\k\7\2\n\8\f\u\7\z\o\q\9\j\2\x\o\4\5\t\d\c\c\d\k\i\8\d\1\k\n\q\8\j\g\o\1\c\0\f\i\i\e\a\g\v\4\a\f\9\n\u\5\8\p\m\h\4\r\2\k\9\x\l\d\l\w\u\8\k\r\h\f\r\0\p\9\9\u\4\y\1\x\h\2\e\2\1\v\q\x\f\s\y\v\j\w\m\8\d\e\y\z\c\r\h\2\y\g\p\v\x\m\p\y\c\b\x\r\t\f\u\s\e\h\d\t\9\8\k\v\2\e\5\b\e\d\j\g\d\t\h\i\v\v\0\n\8\s\o\u\a\d\o\z\l\l\s\2\h\m\j\v\b\1\e\2\l\n\c\m\0\j\h\5\r\5\6\u\v\5\h\h\8\o\q ]] 00:08:26.065 00:08:26.065 real 0m4.569s 00:08:26.065 user 0m2.447s 00:08:26.065 sys 0m1.127s 00:08:26.065 04:11:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:26.065 04:11:38 -- common/autotest_common.sh@10 -- # set +x 00:08:26.065 04:11:38 -- dd/posix.sh@131 -- # tests_forced_aio 00:08:26.065 04:11:38 -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:08:26.065 * Second test run, disabling liburing, forcing AIO 00:08:26.065 04:11:38 -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:08:26.065 04:11:38 -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:08:26.065 04:11:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:26.065 04:11:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:26.065 04:11:38 -- common/autotest_common.sh@10 -- # set +x 00:08:26.065 ************************************ 00:08:26.065 START TEST dd_flag_append_forced_aio 00:08:26.065 ************************************ 00:08:26.065 04:11:38 -- common/autotest_common.sh@1114 -- # append 00:08:26.065 04:11:38 -- dd/posix.sh@16 -- # local dump0 00:08:26.065 04:11:38 -- dd/posix.sh@17 -- # local dump1 00:08:26.065 04:11:38 -- dd/posix.sh@19 -- # gen_bytes 32 
00:08:26.065 04:11:38 -- dd/common.sh@98 -- # xtrace_disable 00:08:26.065 04:11:38 -- common/autotest_common.sh@10 -- # set +x 00:08:26.065 04:11:38 -- dd/posix.sh@19 -- # dump0=7kts4vogh21voaz6biqs5ywofuu3hpcn 00:08:26.065 04:11:38 -- dd/posix.sh@20 -- # gen_bytes 32 00:08:26.065 04:11:38 -- dd/common.sh@98 -- # xtrace_disable 00:08:26.065 04:11:38 -- common/autotest_common.sh@10 -- # set +x 00:08:26.065 04:11:38 -- dd/posix.sh@20 -- # dump1=1nmzjjc88e48qpp0how4uroee92nqifd 00:08:26.065 04:11:38 -- dd/posix.sh@22 -- # printf %s 7kts4vogh21voaz6biqs5ywofuu3hpcn 00:08:26.065 04:11:38 -- dd/posix.sh@23 -- # printf %s 1nmzjjc88e48qpp0how4uroee92nqifd 00:08:26.065 04:11:38 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:08:26.065 [2024-12-06 04:11:38.581408] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:26.065 [2024-12-06 04:11:38.581514] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70560 ] 00:08:26.324 [2024-12-06 04:11:38.720977] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.324 [2024-12-06 04:11:38.800484] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.324  [2024-12-06T04:11:39.147Z] Copying: 32/32 [B] (average 31 kBps) 00:08:26.582 00:08:26.582 ************************************ 00:08:26.582 END TEST dd_flag_append_forced_aio 00:08:26.582 ************************************ 00:08:26.582 04:11:39 -- dd/posix.sh@27 -- # [[ 1nmzjjc88e48qpp0how4uroee92nqifd7kts4vogh21voaz6biqs5ywofuu3hpcn == \1\n\m\z\j\j\c\8\8\e\4\8\q\p\p\0\h\o\w\4\u\r\o\e\e\9\2\n\q\i\f\d\7\k\t\s\4\v\o\g\h\2\1\v\o\a\z\6\b\i\q\s\5\y\w\o\f\u\u\3\h\p\c\n ]] 00:08:26.582 00:08:26.582 real 0m0.562s 00:08:26.582 user 0m0.292s 00:08:26.582 sys 0m0.147s 00:08:26.582 04:11:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:26.582 04:11:39 -- common/autotest_common.sh@10 -- # set +x 00:08:26.582 04:11:39 -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:08:26.582 04:11:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:26.582 04:11:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:26.582 04:11:39 -- common/autotest_common.sh@10 -- # set +x 00:08:26.582 ************************************ 00:08:26.582 START TEST dd_flag_directory_forced_aio 00:08:26.582 ************************************ 00:08:26.582 04:11:39 -- common/autotest_common.sh@1114 -- # directory 00:08:26.582 04:11:39 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:26.582 04:11:39 -- common/autotest_common.sh@650 -- # local es=0 00:08:26.582 04:11:39 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:26.582 04:11:39 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:26.582 04:11:39 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:26.582 04:11:39 -- common/autotest_common.sh@642 -- # 
type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:26.582 04:11:39 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:26.582 04:11:39 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:26.842 04:11:39 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:26.842 04:11:39 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:26.842 04:11:39 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:26.842 04:11:39 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:26.842 [2024-12-06 04:11:39.187601] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:26.842 [2024-12-06 04:11:39.187682] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70586 ] 00:08:26.842 [2024-12-06 04:11:39.320681] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.842 [2024-12-06 04:11:39.396708] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.101 [2024-12-06 04:11:39.481340] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:27.101 [2024-12-06 04:11:39.481452] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:27.101 [2024-12-06 04:11:39.481484] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:27.101 [2024-12-06 04:11:39.588889] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:08:27.360 04:11:39 -- common/autotest_common.sh@653 -- # es=236 00:08:27.360 04:11:39 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:27.360 04:11:39 -- common/autotest_common.sh@662 -- # es=108 00:08:27.360 04:11:39 -- common/autotest_common.sh@663 -- # case "$es" in 00:08:27.360 04:11:39 -- common/autotest_common.sh@670 -- # es=1 00:08:27.360 04:11:39 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:27.360 04:11:39 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:27.360 04:11:39 -- common/autotest_common.sh@650 -- # local es=0 00:08:27.360 04:11:39 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:27.360 04:11:39 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:27.360 04:11:39 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:27.360 04:11:39 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:27.360 04:11:39 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:27.360 04:11:39 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:27.360 04:11:39 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:27.360 04:11:39 -- 
common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:27.360 04:11:39 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:27.360 04:11:39 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:27.360 [2024-12-06 04:11:39.705939] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:27.360 [2024-12-06 04:11:39.706167] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70596 ] 00:08:27.360 [2024-12-06 04:11:39.837312] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.360 [2024-12-06 04:11:39.909757] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.618 [2024-12-06 04:11:39.993079] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:27.618 [2024-12-06 04:11:39.993463] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:27.619 [2024-12-06 04:11:39.993484] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:27.619 [2024-12-06 04:11:40.102806] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:08:27.619 04:11:40 -- common/autotest_common.sh@653 -- # es=236 00:08:27.619 04:11:40 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:27.619 04:11:40 -- common/autotest_common.sh@662 -- # es=108 00:08:27.619 04:11:40 -- common/autotest_common.sh@663 -- # case "$es" in 00:08:27.619 04:11:40 -- common/autotest_common.sh@670 -- # es=1 00:08:27.619 04:11:40 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:27.619 00:08:27.619 real 0m1.041s 00:08:27.619 user 0m0.550s 00:08:27.619 sys 0m0.281s 00:08:27.619 04:11:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:27.619 ************************************ 00:08:27.619 END TEST dd_flag_directory_forced_aio 00:08:27.619 ************************************ 00:08:27.619 04:11:40 -- common/autotest_common.sh@10 -- # set +x 00:08:27.877 04:11:40 -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:08:27.877 04:11:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:27.877 04:11:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:27.877 04:11:40 -- common/autotest_common.sh@10 -- # set +x 00:08:27.877 ************************************ 00:08:27.877 START TEST dd_flag_nofollow_forced_aio 00:08:27.877 ************************************ 00:08:27.877 04:11:40 -- common/autotest_common.sh@1114 -- # nofollow 00:08:27.877 04:11:40 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:27.877 04:11:40 -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:27.877 04:11:40 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:27.877 04:11:40 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:27.877 04:11:40 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:27.877 04:11:40 -- common/autotest_common.sh@650 -- # local es=0 00:08:27.877 04:11:40 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:27.877 04:11:40 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:27.877 04:11:40 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:27.877 04:11:40 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:27.877 04:11:40 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:27.877 04:11:40 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:27.878 04:11:40 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:27.878 04:11:40 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:27.878 04:11:40 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:27.878 04:11:40 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:27.878 [2024-12-06 04:11:40.294289] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:27.878 [2024-12-06 04:11:40.294456] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70624 ] 00:08:27.878 [2024-12-06 04:11:40.435768] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.137 [2024-12-06 04:11:40.520712] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.137 [2024-12-06 04:11:40.603936] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:28.137 [2024-12-06 04:11:40.603997] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:28.137 [2024-12-06 04:11:40.604030] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:28.396 [2024-12-06 04:11:40.714420] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:08:28.396 04:11:40 -- common/autotest_common.sh@653 -- # es=216 00:08:28.396 04:11:40 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:28.396 04:11:40 -- common/autotest_common.sh@662 -- # es=88 00:08:28.396 04:11:40 -- common/autotest_common.sh@663 -- # case "$es" in 00:08:28.396 04:11:40 -- common/autotest_common.sh@670 -- # es=1 00:08:28.396 04:11:40 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:28.396 04:11:40 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:28.396 04:11:40 -- common/autotest_common.sh@650 -- # local es=0 00:08:28.396 04:11:40 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:28.396 04:11:40 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:28.396 04:11:40 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:28.396 04:11:40 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:28.396 04:11:40 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:28.396 04:11:40 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:28.396 04:11:40 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:28.396 04:11:40 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:28.396 04:11:40 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:28.396 04:11:40 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:28.396 [2024-12-06 04:11:40.853306] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:28.396 [2024-12-06 04:11:40.853425] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70638 ] 00:08:28.656 [2024-12-06 04:11:40.992004] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.656 [2024-12-06 04:11:41.071552] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.656 [2024-12-06 04:11:41.154946] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:28.656 [2024-12-06 04:11:41.155006] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:28.656 [2024-12-06 04:11:41.155038] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:28.914 [2024-12-06 04:11:41.265360] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:08:28.914 04:11:41 -- common/autotest_common.sh@653 -- # es=216 00:08:28.914 04:11:41 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:28.914 04:11:41 -- common/autotest_common.sh@662 -- # es=88 00:08:28.914 04:11:41 -- common/autotest_common.sh@663 -- # case "$es" in 00:08:28.914 04:11:41 -- common/autotest_common.sh@670 -- # es=1 00:08:28.914 04:11:41 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:28.914 04:11:41 -- dd/posix.sh@46 -- # gen_bytes 512 00:08:28.914 04:11:41 -- dd/common.sh@98 -- # xtrace_disable 00:08:28.914 04:11:41 -- common/autotest_common.sh@10 -- # set +x 00:08:28.914 04:11:41 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:28.914 [2024-12-06 04:11:41.401344] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:28.914 [2024-12-06 04:11:41.401498] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70641 ] 00:08:29.173 [2024-12-06 04:11:41.540202] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.173 [2024-12-06 04:11:41.618041] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.173  [2024-12-06T04:11:41.997Z] Copying: 512/512 [B] (average 500 kBps) 00:08:29.432 00:08:29.432 ************************************ 00:08:29.432 END TEST dd_flag_nofollow_forced_aio 00:08:29.432 ************************************ 00:08:29.432 04:11:41 -- dd/posix.sh@49 -- # [[ s5989lx0rtxsu4ejjpdbyu7vi72l60bmzxpbd5f5qaj2wh7701yvbwxmducemts90g0wmbo04w0ukokjbz5d6jose3pjlxvierv3b1sww44z5o0mgmcwsrg7a6lt13wz8mhuivomd3kije0eae1p3zb70elwqlw0vflfbd6q5yy6dhcj8zhuzo2vxat1taoq9pfk0mwptctqen0sao687mwy25uiswrx2ro74h3xh5c0xsdlhh9lf3agflfmxxdlu0ppxgxkj8lqmhbyv23uihv466q4amx2h18qyumx3brgww1335yx9oxaw52d0g324kejy17tumca1rnxnwfe1aw7zhtwy7dmvotaqcrv0trzq9ycdtt8m3wb45ncpzo0tu5sc4lp3098ea7o8zwwzo5zunst231vc52gungw2isw2uy5y5ue05lnrib2e2rej6l92sn117e5vhqb1fm9lnifc4oeh53laojz2ekr6noh1w8p39b3rd8qy6w1jyye == \s\5\9\8\9\l\x\0\r\t\x\s\u\4\e\j\j\p\d\b\y\u\7\v\i\7\2\l\6\0\b\m\z\x\p\b\d\5\f\5\q\a\j\2\w\h\7\7\0\1\y\v\b\w\x\m\d\u\c\e\m\t\s\9\0\g\0\w\m\b\o\0\4\w\0\u\k\o\k\j\b\z\5\d\6\j\o\s\e\3\p\j\l\x\v\i\e\r\v\3\b\1\s\w\w\4\4\z\5\o\0\m\g\m\c\w\s\r\g\7\a\6\l\t\1\3\w\z\8\m\h\u\i\v\o\m\d\3\k\i\j\e\0\e\a\e\1\p\3\z\b\7\0\e\l\w\q\l\w\0\v\f\l\f\b\d\6\q\5\y\y\6\d\h\c\j\8\z\h\u\z\o\2\v\x\a\t\1\t\a\o\q\9\p\f\k\0\m\w\p\t\c\t\q\e\n\0\s\a\o\6\8\7\m\w\y\2\5\u\i\s\w\r\x\2\r\o\7\4\h\3\x\h\5\c\0\x\s\d\l\h\h\9\l\f\3\a\g\f\l\f\m\x\x\d\l\u\0\p\p\x\g\x\k\j\8\l\q\m\h\b\y\v\2\3\u\i\h\v\4\6\6\q\4\a\m\x\2\h\1\8\q\y\u\m\x\3\b\r\g\w\w\1\3\3\5\y\x\9\o\x\a\w\5\2\d\0\g\3\2\4\k\e\j\y\1\7\t\u\m\c\a\1\r\n\x\n\w\f\e\1\a\w\7\z\h\t\w\y\7\d\m\v\o\t\a\q\c\r\v\0\t\r\z\q\9\y\c\d\t\t\8\m\3\w\b\4\5\n\c\p\z\o\0\t\u\5\s\c\4\l\p\3\0\9\8\e\a\7\o\8\z\w\w\z\o\5\z\u\n\s\t\2\3\1\v\c\5\2\g\u\n\g\w\2\i\s\w\2\u\y\5\y\5\u\e\0\5\l\n\r\i\b\2\e\2\r\e\j\6\l\9\2\s\n\1\1\7\e\5\v\h\q\b\1\f\m\9\l\n\i\f\c\4\o\e\h\5\3\l\a\o\j\z\2\e\k\r\6\n\o\h\1\w\8\p\3\9\b\3\r\d\8\q\y\6\w\1\j\y\y\e ]] 00:08:29.432 00:08:29.432 real 0m1.686s 00:08:29.432 user 0m0.909s 00:08:29.432 sys 0m0.444s 00:08:29.432 04:11:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:29.432 04:11:41 -- common/autotest_common.sh@10 -- # set +x 00:08:29.432 04:11:41 -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:08:29.432 04:11:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:29.432 04:11:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:29.432 04:11:41 -- common/autotest_common.sh@10 -- # set +x 00:08:29.432 ************************************ 00:08:29.432 START TEST dd_flag_noatime_forced_aio 00:08:29.432 ************************************ 00:08:29.432 04:11:41 -- common/autotest_common.sh@1114 -- # noatime 00:08:29.432 04:11:41 -- dd/posix.sh@53 -- # local atime_if 00:08:29.432 04:11:41 -- dd/posix.sh@54 -- # local atime_of 00:08:29.432 04:11:41 -- dd/posix.sh@58 -- # gen_bytes 512 00:08:29.432 04:11:41 -- dd/common.sh@98 -- # xtrace_disable 00:08:29.432 04:11:41 -- common/autotest_common.sh@10 -- # set +x 00:08:29.432 04:11:41 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:29.432 04:11:41 -- dd/posix.sh@60 -- 
# atime_if=1733458301 00:08:29.432 04:11:41 -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:29.432 04:11:41 -- dd/posix.sh@61 -- # atime_of=1733458301 00:08:29.432 04:11:41 -- dd/posix.sh@66 -- # sleep 1 00:08:30.811 04:11:42 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:30.811 [2024-12-06 04:11:43.032384] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:30.811 [2024-12-06 04:11:43.032539] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70687 ] 00:08:30.811 [2024-12-06 04:11:43.166952] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.811 [2024-12-06 04:11:43.257952] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.811  [2024-12-06T04:11:43.639Z] Copying: 512/512 [B] (average 500 kBps) 00:08:31.074 00:08:31.074 04:11:43 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:31.074 04:11:43 -- dd/posix.sh@69 -- # (( atime_if == 1733458301 )) 00:08:31.074 04:11:43 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:31.074 04:11:43 -- dd/posix.sh@70 -- # (( atime_of == 1733458301 )) 00:08:31.074 04:11:43 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:31.074 [2024-12-06 04:11:43.616723] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
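The noatime case above records the input file's access time with stat --printf=%X, sleeps one second, copies with --iflag=noatime, and then asserts the atime is unchanged; the follow-up copy without the flag is expected to move it forward. A condensed sketch of that assertion, with example file names and spdk_dd assumed to be on PATH:
atime_before=$(stat --printf=%X input.bin)     # access time in epoch seconds before the copy
sleep 1                                        # ensure an ordinary read would advance atime
spdk_dd --aio --if=input.bin --iflag=noatime --of=output.bin
atime_after=$(stat --printf=%X input.bin)
(( atime_before == atime_after )) || { echo "atime moved despite noatime" >&2; exit 1; }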
00:08:31.074 [2024-12-06 04:11:43.616836] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70699 ] 00:08:31.336 [2024-12-06 04:11:43.757563] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.336 [2024-12-06 04:11:43.847004] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.594  [2024-12-06T04:11:44.159Z] Copying: 512/512 [B] (average 500 kBps) 00:08:31.594 00:08:31.853 04:11:44 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:31.853 04:11:44 -- dd/posix.sh@73 -- # (( atime_if < 1733458303 )) 00:08:31.853 00:08:31.853 real 0m2.196s 00:08:31.853 user 0m0.635s 00:08:31.853 sys 0m0.315s 00:08:31.853 ************************************ 00:08:31.853 END TEST dd_flag_noatime_forced_aio 00:08:31.853 ************************************ 00:08:31.853 04:11:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:31.853 04:11:44 -- common/autotest_common.sh@10 -- # set +x 00:08:31.853 04:11:44 -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:08:31.853 04:11:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:31.853 04:11:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:31.853 04:11:44 -- common/autotest_common.sh@10 -- # set +x 00:08:31.853 ************************************ 00:08:31.853 START TEST dd_flags_misc_forced_aio 00:08:31.853 ************************************ 00:08:31.853 04:11:44 -- common/autotest_common.sh@1114 -- # io 00:08:31.853 04:11:44 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:08:31.853 04:11:44 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:08:31.853 04:11:44 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:08:31.853 04:11:44 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:31.853 04:11:44 -- dd/posix.sh@86 -- # gen_bytes 512 00:08:31.853 04:11:44 -- dd/common.sh@98 -- # xtrace_disable 00:08:31.853 04:11:44 -- common/autotest_common.sh@10 -- # set +x 00:08:31.853 04:11:44 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:31.853 04:11:44 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:31.853 [2024-12-06 04:11:44.274728] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
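The flags_ro and flags_rw arrays declared above drive a two-level loop: each read-flag/write-flag pair (direct and nonblock on both sides, plus sync and dsync on the write side) gets a fresh 512-byte payload, a copy through spdk_dd, and a byte-for-byte check of the output, which is what the long match expressions that follow are verifying. A sketch of that loop, with tr/head standing in for the harness's gen_bytes helper and spdk_dd assumed on PATH:
flags_ro=(direct nonblock)
flags_rw=("${flags_ro[@]}" sync dsync)                       # write side also exercises sync and dsync
for flag_ro in "${flags_ro[@]}"; do
    tr -dc 'a-z0-9' </dev/urandom | head -c 512 > dd.dump0   # 512-byte payload per read-flag row
    for flag_rw in "${flags_rw[@]}"; do
        spdk_dd --aio --if=dd.dump0 --iflag="$flag_ro" --of=dd.dump1 --oflag="$flag_rw"
        cmp -s dd.dump0 dd.dump1 || { echo "mismatch for $flag_ro/$flag_rw" >&2; exit 1; }
    done
done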
00:08:31.853 [2024-12-06 04:11:44.275037] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70725 ] 00:08:31.853 [2024-12-06 04:11:44.415174] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.113 [2024-12-06 04:11:44.503603] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.113  [2024-12-06T04:11:44.937Z] Copying: 512/512 [B] (average 500 kBps) 00:08:32.372 00:08:32.372 04:11:44 -- dd/posix.sh@93 -- # [[ uludfktw48y019v1z6e1g8t7idxdmt8jgro0avyyx7bhltjgd91vrlfupznwoitac84s6iqv9kq1rwdrcx5sogphavaud8b2qh0z94r7oy7o6fweuad1jdo6it6phjbygi592ao2jv22w3v9eczq91zju94xo5f4kvv6kqokgwb2khmp7jlu45yl6mz17y5bols6hx94tp13vko7p8g0zjcenpituiuzfcnkoe019ox7l90x8j4ew7v2vtw5givmt9exxjhgxdb1riuhq2l2dhrox7oncse00unz3ggq88w11x3d2mmvo5k5dp1mpe2s52u1wisu3xi3dz8ogck8903mee2ebzvyi1n237fbtwrw142pydp8axjw2nsprxolo962owrttcedq4tktuf8it3y7a6si5lmykniue2po5no2wvf0ugkjvpxftps8ejrodkgec901is9gs9sndgwomv1k1n79mbz09hjvp5zjtqqkuwoqz7gn0w3xdfc8v1m == \u\l\u\d\f\k\t\w\4\8\y\0\1\9\v\1\z\6\e\1\g\8\t\7\i\d\x\d\m\t\8\j\g\r\o\0\a\v\y\y\x\7\b\h\l\t\j\g\d\9\1\v\r\l\f\u\p\z\n\w\o\i\t\a\c\8\4\s\6\i\q\v\9\k\q\1\r\w\d\r\c\x\5\s\o\g\p\h\a\v\a\u\d\8\b\2\q\h\0\z\9\4\r\7\o\y\7\o\6\f\w\e\u\a\d\1\j\d\o\6\i\t\6\p\h\j\b\y\g\i\5\9\2\a\o\2\j\v\2\2\w\3\v\9\e\c\z\q\9\1\z\j\u\9\4\x\o\5\f\4\k\v\v\6\k\q\o\k\g\w\b\2\k\h\m\p\7\j\l\u\4\5\y\l\6\m\z\1\7\y\5\b\o\l\s\6\h\x\9\4\t\p\1\3\v\k\o\7\p\8\g\0\z\j\c\e\n\p\i\t\u\i\u\z\f\c\n\k\o\e\0\1\9\o\x\7\l\9\0\x\8\j\4\e\w\7\v\2\v\t\w\5\g\i\v\m\t\9\e\x\x\j\h\g\x\d\b\1\r\i\u\h\q\2\l\2\d\h\r\o\x\7\o\n\c\s\e\0\0\u\n\z\3\g\g\q\8\8\w\1\1\x\3\d\2\m\m\v\o\5\k\5\d\p\1\m\p\e\2\s\5\2\u\1\w\i\s\u\3\x\i\3\d\z\8\o\g\c\k\8\9\0\3\m\e\e\2\e\b\z\v\y\i\1\n\2\3\7\f\b\t\w\r\w\1\4\2\p\y\d\p\8\a\x\j\w\2\n\s\p\r\x\o\l\o\9\6\2\o\w\r\t\t\c\e\d\q\4\t\k\t\u\f\8\i\t\3\y\7\a\6\s\i\5\l\m\y\k\n\i\u\e\2\p\o\5\n\o\2\w\v\f\0\u\g\k\j\v\p\x\f\t\p\s\8\e\j\r\o\d\k\g\e\c\9\0\1\i\s\9\g\s\9\s\n\d\g\w\o\m\v\1\k\1\n\7\9\m\b\z\0\9\h\j\v\p\5\z\j\t\q\q\k\u\w\o\q\z\7\g\n\0\w\3\x\d\f\c\8\v\1\m ]] 00:08:32.372 04:11:44 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:32.372 04:11:44 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:32.372 [2024-12-06 04:11:44.850490] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:32.372 [2024-12-06 04:11:44.850774] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70733 ] 00:08:32.632 [2024-12-06 04:11:44.990217] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.632 [2024-12-06 04:11:45.053437] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.632  [2024-12-06T04:11:45.459Z] Copying: 512/512 [B] (average 500 kBps) 00:08:32.894 00:08:32.894 04:11:45 -- dd/posix.sh@93 -- # [[ uludfktw48y019v1z6e1g8t7idxdmt8jgro0avyyx7bhltjgd91vrlfupznwoitac84s6iqv9kq1rwdrcx5sogphavaud8b2qh0z94r7oy7o6fweuad1jdo6it6phjbygi592ao2jv22w3v9eczq91zju94xo5f4kvv6kqokgwb2khmp7jlu45yl6mz17y5bols6hx94tp13vko7p8g0zjcenpituiuzfcnkoe019ox7l90x8j4ew7v2vtw5givmt9exxjhgxdb1riuhq2l2dhrox7oncse00unz3ggq88w11x3d2mmvo5k5dp1mpe2s52u1wisu3xi3dz8ogck8903mee2ebzvyi1n237fbtwrw142pydp8axjw2nsprxolo962owrttcedq4tktuf8it3y7a6si5lmykniue2po5no2wvf0ugkjvpxftps8ejrodkgec901is9gs9sndgwomv1k1n79mbz09hjvp5zjtqqkuwoqz7gn0w3xdfc8v1m == \u\l\u\d\f\k\t\w\4\8\y\0\1\9\v\1\z\6\e\1\g\8\t\7\i\d\x\d\m\t\8\j\g\r\o\0\a\v\y\y\x\7\b\h\l\t\j\g\d\9\1\v\r\l\f\u\p\z\n\w\o\i\t\a\c\8\4\s\6\i\q\v\9\k\q\1\r\w\d\r\c\x\5\s\o\g\p\h\a\v\a\u\d\8\b\2\q\h\0\z\9\4\r\7\o\y\7\o\6\f\w\e\u\a\d\1\j\d\o\6\i\t\6\p\h\j\b\y\g\i\5\9\2\a\o\2\j\v\2\2\w\3\v\9\e\c\z\q\9\1\z\j\u\9\4\x\o\5\f\4\k\v\v\6\k\q\o\k\g\w\b\2\k\h\m\p\7\j\l\u\4\5\y\l\6\m\z\1\7\y\5\b\o\l\s\6\h\x\9\4\t\p\1\3\v\k\o\7\p\8\g\0\z\j\c\e\n\p\i\t\u\i\u\z\f\c\n\k\o\e\0\1\9\o\x\7\l\9\0\x\8\j\4\e\w\7\v\2\v\t\w\5\g\i\v\m\t\9\e\x\x\j\h\g\x\d\b\1\r\i\u\h\q\2\l\2\d\h\r\o\x\7\o\n\c\s\e\0\0\u\n\z\3\g\g\q\8\8\w\1\1\x\3\d\2\m\m\v\o\5\k\5\d\p\1\m\p\e\2\s\5\2\u\1\w\i\s\u\3\x\i\3\d\z\8\o\g\c\k\8\9\0\3\m\e\e\2\e\b\z\v\y\i\1\n\2\3\7\f\b\t\w\r\w\1\4\2\p\y\d\p\8\a\x\j\w\2\n\s\p\r\x\o\l\o\9\6\2\o\w\r\t\t\c\e\d\q\4\t\k\t\u\f\8\i\t\3\y\7\a\6\s\i\5\l\m\y\k\n\i\u\e\2\p\o\5\n\o\2\w\v\f\0\u\g\k\j\v\p\x\f\t\p\s\8\e\j\r\o\d\k\g\e\c\9\0\1\i\s\9\g\s\9\s\n\d\g\w\o\m\v\1\k\1\n\7\9\m\b\z\0\9\h\j\v\p\5\z\j\t\q\q\k\u\w\o\q\z\7\g\n\0\w\3\x\d\f\c\8\v\1\m ]] 00:08:32.894 04:11:45 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:32.894 04:11:45 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:32.894 [2024-12-06 04:11:45.413275] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:32.894 [2024-12-06 04:11:45.413376] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70740 ] 00:08:33.157 [2024-12-06 04:11:45.551008] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.157 [2024-12-06 04:11:45.627308] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.157  [2024-12-06T04:11:45.980Z] Copying: 512/512 [B] (average 166 kBps) 00:08:33.415 00:08:33.415 04:11:45 -- dd/posix.sh@93 -- # [[ uludfktw48y019v1z6e1g8t7idxdmt8jgro0avyyx7bhltjgd91vrlfupznwoitac84s6iqv9kq1rwdrcx5sogphavaud8b2qh0z94r7oy7o6fweuad1jdo6it6phjbygi592ao2jv22w3v9eczq91zju94xo5f4kvv6kqokgwb2khmp7jlu45yl6mz17y5bols6hx94tp13vko7p8g0zjcenpituiuzfcnkoe019ox7l90x8j4ew7v2vtw5givmt9exxjhgxdb1riuhq2l2dhrox7oncse00unz3ggq88w11x3d2mmvo5k5dp1mpe2s52u1wisu3xi3dz8ogck8903mee2ebzvyi1n237fbtwrw142pydp8axjw2nsprxolo962owrttcedq4tktuf8it3y7a6si5lmykniue2po5no2wvf0ugkjvpxftps8ejrodkgec901is9gs9sndgwomv1k1n79mbz09hjvp5zjtqqkuwoqz7gn0w3xdfc8v1m == \u\l\u\d\f\k\t\w\4\8\y\0\1\9\v\1\z\6\e\1\g\8\t\7\i\d\x\d\m\t\8\j\g\r\o\0\a\v\y\y\x\7\b\h\l\t\j\g\d\9\1\v\r\l\f\u\p\z\n\w\o\i\t\a\c\8\4\s\6\i\q\v\9\k\q\1\r\w\d\r\c\x\5\s\o\g\p\h\a\v\a\u\d\8\b\2\q\h\0\z\9\4\r\7\o\y\7\o\6\f\w\e\u\a\d\1\j\d\o\6\i\t\6\p\h\j\b\y\g\i\5\9\2\a\o\2\j\v\2\2\w\3\v\9\e\c\z\q\9\1\z\j\u\9\4\x\o\5\f\4\k\v\v\6\k\q\o\k\g\w\b\2\k\h\m\p\7\j\l\u\4\5\y\l\6\m\z\1\7\y\5\b\o\l\s\6\h\x\9\4\t\p\1\3\v\k\o\7\p\8\g\0\z\j\c\e\n\p\i\t\u\i\u\z\f\c\n\k\o\e\0\1\9\o\x\7\l\9\0\x\8\j\4\e\w\7\v\2\v\t\w\5\g\i\v\m\t\9\e\x\x\j\h\g\x\d\b\1\r\i\u\h\q\2\l\2\d\h\r\o\x\7\o\n\c\s\e\0\0\u\n\z\3\g\g\q\8\8\w\1\1\x\3\d\2\m\m\v\o\5\k\5\d\p\1\m\p\e\2\s\5\2\u\1\w\i\s\u\3\x\i\3\d\z\8\o\g\c\k\8\9\0\3\m\e\e\2\e\b\z\v\y\i\1\n\2\3\7\f\b\t\w\r\w\1\4\2\p\y\d\p\8\a\x\j\w\2\n\s\p\r\x\o\l\o\9\6\2\o\w\r\t\t\c\e\d\q\4\t\k\t\u\f\8\i\t\3\y\7\a\6\s\i\5\l\m\y\k\n\i\u\e\2\p\o\5\n\o\2\w\v\f\0\u\g\k\j\v\p\x\f\t\p\s\8\e\j\r\o\d\k\g\e\c\9\0\1\i\s\9\g\s\9\s\n\d\g\w\o\m\v\1\k\1\n\7\9\m\b\z\0\9\h\j\v\p\5\z\j\t\q\q\k\u\w\o\q\z\7\g\n\0\w\3\x\d\f\c\8\v\1\m ]] 00:08:33.415 04:11:45 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:33.415 04:11:45 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:33.415 [2024-12-06 04:11:45.977380] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:33.416 [2024-12-06 04:11:45.977524] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70748 ] 00:08:33.674 [2024-12-06 04:11:46.115124] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.674 [2024-12-06 04:11:46.198140] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.933  [2024-12-06T04:11:46.498Z] Copying: 512/512 [B] (average 500 kBps) 00:08:33.933 00:08:33.933 04:11:46 -- dd/posix.sh@93 -- # [[ uludfktw48y019v1z6e1g8t7idxdmt8jgro0avyyx7bhltjgd91vrlfupznwoitac84s6iqv9kq1rwdrcx5sogphavaud8b2qh0z94r7oy7o6fweuad1jdo6it6phjbygi592ao2jv22w3v9eczq91zju94xo5f4kvv6kqokgwb2khmp7jlu45yl6mz17y5bols6hx94tp13vko7p8g0zjcenpituiuzfcnkoe019ox7l90x8j4ew7v2vtw5givmt9exxjhgxdb1riuhq2l2dhrox7oncse00unz3ggq88w11x3d2mmvo5k5dp1mpe2s52u1wisu3xi3dz8ogck8903mee2ebzvyi1n237fbtwrw142pydp8axjw2nsprxolo962owrttcedq4tktuf8it3y7a6si5lmykniue2po5no2wvf0ugkjvpxftps8ejrodkgec901is9gs9sndgwomv1k1n79mbz09hjvp5zjtqqkuwoqz7gn0w3xdfc8v1m == \u\l\u\d\f\k\t\w\4\8\y\0\1\9\v\1\z\6\e\1\g\8\t\7\i\d\x\d\m\t\8\j\g\r\o\0\a\v\y\y\x\7\b\h\l\t\j\g\d\9\1\v\r\l\f\u\p\z\n\w\o\i\t\a\c\8\4\s\6\i\q\v\9\k\q\1\r\w\d\r\c\x\5\s\o\g\p\h\a\v\a\u\d\8\b\2\q\h\0\z\9\4\r\7\o\y\7\o\6\f\w\e\u\a\d\1\j\d\o\6\i\t\6\p\h\j\b\y\g\i\5\9\2\a\o\2\j\v\2\2\w\3\v\9\e\c\z\q\9\1\z\j\u\9\4\x\o\5\f\4\k\v\v\6\k\q\o\k\g\w\b\2\k\h\m\p\7\j\l\u\4\5\y\l\6\m\z\1\7\y\5\b\o\l\s\6\h\x\9\4\t\p\1\3\v\k\o\7\p\8\g\0\z\j\c\e\n\p\i\t\u\i\u\z\f\c\n\k\o\e\0\1\9\o\x\7\l\9\0\x\8\j\4\e\w\7\v\2\v\t\w\5\g\i\v\m\t\9\e\x\x\j\h\g\x\d\b\1\r\i\u\h\q\2\l\2\d\h\r\o\x\7\o\n\c\s\e\0\0\u\n\z\3\g\g\q\8\8\w\1\1\x\3\d\2\m\m\v\o\5\k\5\d\p\1\m\p\e\2\s\5\2\u\1\w\i\s\u\3\x\i\3\d\z\8\o\g\c\k\8\9\0\3\m\e\e\2\e\b\z\v\y\i\1\n\2\3\7\f\b\t\w\r\w\1\4\2\p\y\d\p\8\a\x\j\w\2\n\s\p\r\x\o\l\o\9\6\2\o\w\r\t\t\c\e\d\q\4\t\k\t\u\f\8\i\t\3\y\7\a\6\s\i\5\l\m\y\k\n\i\u\e\2\p\o\5\n\o\2\w\v\f\0\u\g\k\j\v\p\x\f\t\p\s\8\e\j\r\o\d\k\g\e\c\9\0\1\i\s\9\g\s\9\s\n\d\g\w\o\m\v\1\k\1\n\7\9\m\b\z\0\9\h\j\v\p\5\z\j\t\q\q\k\u\w\o\q\z\7\g\n\0\w\3\x\d\f\c\8\v\1\m ]] 00:08:33.933 04:11:46 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:33.933 04:11:46 -- dd/posix.sh@86 -- # gen_bytes 512 00:08:34.192 04:11:46 -- dd/common.sh@98 -- # xtrace_disable 00:08:34.192 04:11:46 -- common/autotest_common.sh@10 -- # set +x 00:08:34.192 04:11:46 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:34.192 04:11:46 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:34.192 [2024-12-06 04:11:46.553489] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:34.192 [2024-12-06 04:11:46.553761] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70755 ] 00:08:34.192 [2024-12-06 04:11:46.695513] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.451 [2024-12-06 04:11:46.759448] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.451  [2024-12-06T04:11:47.275Z] Copying: 512/512 [B] (average 500 kBps) 00:08:34.710 00:08:34.710 04:11:47 -- dd/posix.sh@93 -- # [[ ahjvjwupmki6c38qsaimo0q8izr9uhpar1t34krogsg4hp8157hok37qc4t9nusa5h29xuntl3mf5ibqmm973y0h6cxgz29uunsiwvqgnpdvctooeczgp0jkc6c3e0bxc8dfq998jb0tmb4kylb2nnc53mmh4dlb6rqdfdj4xkkp6ecubw0abb2gtia7wcdbrx28qtllj70s8qkv6k81pg4ln8q05l5siiy4zspfs0rwiveuc7f8ls2hfhpc9ywuxwcqd9v1i7leia1jb5gjn8hfgg8w9139yh23bvyv32f45kn9xsdmd65zeni2qhno61qbbh5861m0eibqyd3i1fjx88s7vqg2j30nv6bvtvym6qlwwkpv5o82m6a8wku5txipzt1vm5t9opoxrujh10t202wu25s8mjr9n1qkehanjql4scy750k5mnndmt5xrvf7ediwufvk9yu3uxb4falrxw2hue4swetuowgrfk5twdoy1bvbzzg2ez9r76hh == \a\h\j\v\j\w\u\p\m\k\i\6\c\3\8\q\s\a\i\m\o\0\q\8\i\z\r\9\u\h\p\a\r\1\t\3\4\k\r\o\g\s\g\4\h\p\8\1\5\7\h\o\k\3\7\q\c\4\t\9\n\u\s\a\5\h\2\9\x\u\n\t\l\3\m\f\5\i\b\q\m\m\9\7\3\y\0\h\6\c\x\g\z\2\9\u\u\n\s\i\w\v\q\g\n\p\d\v\c\t\o\o\e\c\z\g\p\0\j\k\c\6\c\3\e\0\b\x\c\8\d\f\q\9\9\8\j\b\0\t\m\b\4\k\y\l\b\2\n\n\c\5\3\m\m\h\4\d\l\b\6\r\q\d\f\d\j\4\x\k\k\p\6\e\c\u\b\w\0\a\b\b\2\g\t\i\a\7\w\c\d\b\r\x\2\8\q\t\l\l\j\7\0\s\8\q\k\v\6\k\8\1\p\g\4\l\n\8\q\0\5\l\5\s\i\i\y\4\z\s\p\f\s\0\r\w\i\v\e\u\c\7\f\8\l\s\2\h\f\h\p\c\9\y\w\u\x\w\c\q\d\9\v\1\i\7\l\e\i\a\1\j\b\5\g\j\n\8\h\f\g\g\8\w\9\1\3\9\y\h\2\3\b\v\y\v\3\2\f\4\5\k\n\9\x\s\d\m\d\6\5\z\e\n\i\2\q\h\n\o\6\1\q\b\b\h\5\8\6\1\m\0\e\i\b\q\y\d\3\i\1\f\j\x\8\8\s\7\v\q\g\2\j\3\0\n\v\6\b\v\t\v\y\m\6\q\l\w\w\k\p\v\5\o\8\2\m\6\a\8\w\k\u\5\t\x\i\p\z\t\1\v\m\5\t\9\o\p\o\x\r\u\j\h\1\0\t\2\0\2\w\u\2\5\s\8\m\j\r\9\n\1\q\k\e\h\a\n\j\q\l\4\s\c\y\7\5\0\k\5\m\n\n\d\m\t\5\x\r\v\f\7\e\d\i\w\u\f\v\k\9\y\u\3\u\x\b\4\f\a\l\r\x\w\2\h\u\e\4\s\w\e\t\u\o\w\g\r\f\k\5\t\w\d\o\y\1\b\v\b\z\z\g\2\e\z\9\r\7\6\h\h ]] 00:08:34.710 04:11:47 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:34.710 04:11:47 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:34.710 [2024-12-06 04:11:47.095452] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:34.710 [2024-12-06 04:11:47.095561] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70763 ] 00:08:34.710 [2024-12-06 04:11:47.234559] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.986 [2024-12-06 04:11:47.302247] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.986  [2024-12-06T04:11:47.823Z] Copying: 512/512 [B] (average 500 kBps) 00:08:35.258 00:08:35.258 04:11:47 -- dd/posix.sh@93 -- # [[ ahjvjwupmki6c38qsaimo0q8izr9uhpar1t34krogsg4hp8157hok37qc4t9nusa5h29xuntl3mf5ibqmm973y0h6cxgz29uunsiwvqgnpdvctooeczgp0jkc6c3e0bxc8dfq998jb0tmb4kylb2nnc53mmh4dlb6rqdfdj4xkkp6ecubw0abb2gtia7wcdbrx28qtllj70s8qkv6k81pg4ln8q05l5siiy4zspfs0rwiveuc7f8ls2hfhpc9ywuxwcqd9v1i7leia1jb5gjn8hfgg8w9139yh23bvyv32f45kn9xsdmd65zeni2qhno61qbbh5861m0eibqyd3i1fjx88s7vqg2j30nv6bvtvym6qlwwkpv5o82m6a8wku5txipzt1vm5t9opoxrujh10t202wu25s8mjr9n1qkehanjql4scy750k5mnndmt5xrvf7ediwufvk9yu3uxb4falrxw2hue4swetuowgrfk5twdoy1bvbzzg2ez9r76hh == \a\h\j\v\j\w\u\p\m\k\i\6\c\3\8\q\s\a\i\m\o\0\q\8\i\z\r\9\u\h\p\a\r\1\t\3\4\k\r\o\g\s\g\4\h\p\8\1\5\7\h\o\k\3\7\q\c\4\t\9\n\u\s\a\5\h\2\9\x\u\n\t\l\3\m\f\5\i\b\q\m\m\9\7\3\y\0\h\6\c\x\g\z\2\9\u\u\n\s\i\w\v\q\g\n\p\d\v\c\t\o\o\e\c\z\g\p\0\j\k\c\6\c\3\e\0\b\x\c\8\d\f\q\9\9\8\j\b\0\t\m\b\4\k\y\l\b\2\n\n\c\5\3\m\m\h\4\d\l\b\6\r\q\d\f\d\j\4\x\k\k\p\6\e\c\u\b\w\0\a\b\b\2\g\t\i\a\7\w\c\d\b\r\x\2\8\q\t\l\l\j\7\0\s\8\q\k\v\6\k\8\1\p\g\4\l\n\8\q\0\5\l\5\s\i\i\y\4\z\s\p\f\s\0\r\w\i\v\e\u\c\7\f\8\l\s\2\h\f\h\p\c\9\y\w\u\x\w\c\q\d\9\v\1\i\7\l\e\i\a\1\j\b\5\g\j\n\8\h\f\g\g\8\w\9\1\3\9\y\h\2\3\b\v\y\v\3\2\f\4\5\k\n\9\x\s\d\m\d\6\5\z\e\n\i\2\q\h\n\o\6\1\q\b\b\h\5\8\6\1\m\0\e\i\b\q\y\d\3\i\1\f\j\x\8\8\s\7\v\q\g\2\j\3\0\n\v\6\b\v\t\v\y\m\6\q\l\w\w\k\p\v\5\o\8\2\m\6\a\8\w\k\u\5\t\x\i\p\z\t\1\v\m\5\t\9\o\p\o\x\r\u\j\h\1\0\t\2\0\2\w\u\2\5\s\8\m\j\r\9\n\1\q\k\e\h\a\n\j\q\l\4\s\c\y\7\5\0\k\5\m\n\n\d\m\t\5\x\r\v\f\7\e\d\i\w\u\f\v\k\9\y\u\3\u\x\b\4\f\a\l\r\x\w\2\h\u\e\4\s\w\e\t\u\o\w\g\r\f\k\5\t\w\d\o\y\1\b\v\b\z\z\g\2\e\z\9\r\7\6\h\h ]] 00:08:35.258 04:11:47 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:35.258 04:11:47 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:35.258 [2024-12-06 04:11:47.629722] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:35.258 [2024-12-06 04:11:47.629831] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70776 ] 00:08:35.258 [2024-12-06 04:11:47.760760] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.516 [2024-12-06 04:11:47.829438] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.516  [2024-12-06T04:11:48.340Z] Copying: 512/512 [B] (average 500 kBps) 00:08:35.775 00:08:35.775 04:11:48 -- dd/posix.sh@93 -- # [[ ahjvjwupmki6c38qsaimo0q8izr9uhpar1t34krogsg4hp8157hok37qc4t9nusa5h29xuntl3mf5ibqmm973y0h6cxgz29uunsiwvqgnpdvctooeczgp0jkc6c3e0bxc8dfq998jb0tmb4kylb2nnc53mmh4dlb6rqdfdj4xkkp6ecubw0abb2gtia7wcdbrx28qtllj70s8qkv6k81pg4ln8q05l5siiy4zspfs0rwiveuc7f8ls2hfhpc9ywuxwcqd9v1i7leia1jb5gjn8hfgg8w9139yh23bvyv32f45kn9xsdmd65zeni2qhno61qbbh5861m0eibqyd3i1fjx88s7vqg2j30nv6bvtvym6qlwwkpv5o82m6a8wku5txipzt1vm5t9opoxrujh10t202wu25s8mjr9n1qkehanjql4scy750k5mnndmt5xrvf7ediwufvk9yu3uxb4falrxw2hue4swetuowgrfk5twdoy1bvbzzg2ez9r76hh == \a\h\j\v\j\w\u\p\m\k\i\6\c\3\8\q\s\a\i\m\o\0\q\8\i\z\r\9\u\h\p\a\r\1\t\3\4\k\r\o\g\s\g\4\h\p\8\1\5\7\h\o\k\3\7\q\c\4\t\9\n\u\s\a\5\h\2\9\x\u\n\t\l\3\m\f\5\i\b\q\m\m\9\7\3\y\0\h\6\c\x\g\z\2\9\u\u\n\s\i\w\v\q\g\n\p\d\v\c\t\o\o\e\c\z\g\p\0\j\k\c\6\c\3\e\0\b\x\c\8\d\f\q\9\9\8\j\b\0\t\m\b\4\k\y\l\b\2\n\n\c\5\3\m\m\h\4\d\l\b\6\r\q\d\f\d\j\4\x\k\k\p\6\e\c\u\b\w\0\a\b\b\2\g\t\i\a\7\w\c\d\b\r\x\2\8\q\t\l\l\j\7\0\s\8\q\k\v\6\k\8\1\p\g\4\l\n\8\q\0\5\l\5\s\i\i\y\4\z\s\p\f\s\0\r\w\i\v\e\u\c\7\f\8\l\s\2\h\f\h\p\c\9\y\w\u\x\w\c\q\d\9\v\1\i\7\l\e\i\a\1\j\b\5\g\j\n\8\h\f\g\g\8\w\9\1\3\9\y\h\2\3\b\v\y\v\3\2\f\4\5\k\n\9\x\s\d\m\d\6\5\z\e\n\i\2\q\h\n\o\6\1\q\b\b\h\5\8\6\1\m\0\e\i\b\q\y\d\3\i\1\f\j\x\8\8\s\7\v\q\g\2\j\3\0\n\v\6\b\v\t\v\y\m\6\q\l\w\w\k\p\v\5\o\8\2\m\6\a\8\w\k\u\5\t\x\i\p\z\t\1\v\m\5\t\9\o\p\o\x\r\u\j\h\1\0\t\2\0\2\w\u\2\5\s\8\m\j\r\9\n\1\q\k\e\h\a\n\j\q\l\4\s\c\y\7\5\0\k\5\m\n\n\d\m\t\5\x\r\v\f\7\e\d\i\w\u\f\v\k\9\y\u\3\u\x\b\4\f\a\l\r\x\w\2\h\u\e\4\s\w\e\t\u\o\w\g\r\f\k\5\t\w\d\o\y\1\b\v\b\z\z\g\2\e\z\9\r\7\6\h\h ]] 00:08:35.775 04:11:48 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:35.775 04:11:48 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:35.775 [2024-12-06 04:11:48.168012] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:35.775 [2024-12-06 04:11:48.168110] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70778 ] 00:08:35.775 [2024-12-06 04:11:48.305232] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.035 [2024-12-06 04:11:48.377801] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.035  [2024-12-06T04:11:48.860Z] Copying: 512/512 [B] (average 500 kBps) 00:08:36.295 00:08:36.295 04:11:48 -- dd/posix.sh@93 -- # [[ ahjvjwupmki6c38qsaimo0q8izr9uhpar1t34krogsg4hp8157hok37qc4t9nusa5h29xuntl3mf5ibqmm973y0h6cxgz29uunsiwvqgnpdvctooeczgp0jkc6c3e0bxc8dfq998jb0tmb4kylb2nnc53mmh4dlb6rqdfdj4xkkp6ecubw0abb2gtia7wcdbrx28qtllj70s8qkv6k81pg4ln8q05l5siiy4zspfs0rwiveuc7f8ls2hfhpc9ywuxwcqd9v1i7leia1jb5gjn8hfgg8w9139yh23bvyv32f45kn9xsdmd65zeni2qhno61qbbh5861m0eibqyd3i1fjx88s7vqg2j30nv6bvtvym6qlwwkpv5o82m6a8wku5txipzt1vm5t9opoxrujh10t202wu25s8mjr9n1qkehanjql4scy750k5mnndmt5xrvf7ediwufvk9yu3uxb4falrxw2hue4swetuowgrfk5twdoy1bvbzzg2ez9r76hh == \a\h\j\v\j\w\u\p\m\k\i\6\c\3\8\q\s\a\i\m\o\0\q\8\i\z\r\9\u\h\p\a\r\1\t\3\4\k\r\o\g\s\g\4\h\p\8\1\5\7\h\o\k\3\7\q\c\4\t\9\n\u\s\a\5\h\2\9\x\u\n\t\l\3\m\f\5\i\b\q\m\m\9\7\3\y\0\h\6\c\x\g\z\2\9\u\u\n\s\i\w\v\q\g\n\p\d\v\c\t\o\o\e\c\z\g\p\0\j\k\c\6\c\3\e\0\b\x\c\8\d\f\q\9\9\8\j\b\0\t\m\b\4\k\y\l\b\2\n\n\c\5\3\m\m\h\4\d\l\b\6\r\q\d\f\d\j\4\x\k\k\p\6\e\c\u\b\w\0\a\b\b\2\g\t\i\a\7\w\c\d\b\r\x\2\8\q\t\l\l\j\7\0\s\8\q\k\v\6\k\8\1\p\g\4\l\n\8\q\0\5\l\5\s\i\i\y\4\z\s\p\f\s\0\r\w\i\v\e\u\c\7\f\8\l\s\2\h\f\h\p\c\9\y\w\u\x\w\c\q\d\9\v\1\i\7\l\e\i\a\1\j\b\5\g\j\n\8\h\f\g\g\8\w\9\1\3\9\y\h\2\3\b\v\y\v\3\2\f\4\5\k\n\9\x\s\d\m\d\6\5\z\e\n\i\2\q\h\n\o\6\1\q\b\b\h\5\8\6\1\m\0\e\i\b\q\y\d\3\i\1\f\j\x\8\8\s\7\v\q\g\2\j\3\0\n\v\6\b\v\t\v\y\m\6\q\l\w\w\k\p\v\5\o\8\2\m\6\a\8\w\k\u\5\t\x\i\p\z\t\1\v\m\5\t\9\o\p\o\x\r\u\j\h\1\0\t\2\0\2\w\u\2\5\s\8\m\j\r\9\n\1\q\k\e\h\a\n\j\q\l\4\s\c\y\7\5\0\k\5\m\n\n\d\m\t\5\x\r\v\f\7\e\d\i\w\u\f\v\k\9\y\u\3\u\x\b\4\f\a\l\r\x\w\2\h\u\e\4\s\w\e\t\u\o\w\g\r\f\k\5\t\w\d\o\y\1\b\v\b\z\z\g\2\e\z\9\r\7\6\h\h ]] 00:08:36.295 00:08:36.295 real 0m4.464s 00:08:36.295 user 0m2.313s 00:08:36.295 sys 0m1.165s 00:08:36.295 04:11:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:36.295 ************************************ 00:08:36.295 END TEST dd_flags_misc_forced_aio 00:08:36.295 04:11:48 -- common/autotest_common.sh@10 -- # set +x 00:08:36.295 ************************************ 00:08:36.295 04:11:48 -- dd/posix.sh@1 -- # cleanup 00:08:36.295 04:11:48 -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:36.295 04:11:48 -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:36.295 00:08:36.295 real 0m20.811s 00:08:36.295 user 0m9.877s 00:08:36.295 sys 0m5.095s 00:08:36.295 04:11:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:36.295 04:11:48 -- common/autotest_common.sh@10 -- # set +x 00:08:36.295 ************************************ 00:08:36.295 END TEST spdk_dd_posix 00:08:36.295 ************************************ 00:08:36.295 04:11:48 -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:08:36.295 04:11:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:36.295 04:11:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 
00:08:36.295 04:11:48 -- common/autotest_common.sh@10 -- # set +x 00:08:36.295 ************************************ 00:08:36.295 START TEST spdk_dd_malloc 00:08:36.295 ************************************ 00:08:36.295 04:11:48 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:08:36.295 * Looking for test storage... 00:08:36.295 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:36.295 04:11:48 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:36.295 04:11:48 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:36.295 04:11:48 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:36.555 04:11:48 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:36.555 04:11:48 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:36.555 04:11:48 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:36.555 04:11:48 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:36.555 04:11:48 -- scripts/common.sh@335 -- # IFS=.-: 00:08:36.555 04:11:48 -- scripts/common.sh@335 -- # read -ra ver1 00:08:36.555 04:11:48 -- scripts/common.sh@336 -- # IFS=.-: 00:08:36.555 04:11:48 -- scripts/common.sh@336 -- # read -ra ver2 00:08:36.555 04:11:48 -- scripts/common.sh@337 -- # local 'op=<' 00:08:36.555 04:11:48 -- scripts/common.sh@339 -- # ver1_l=2 00:08:36.555 04:11:48 -- scripts/common.sh@340 -- # ver2_l=1 00:08:36.555 04:11:48 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:36.555 04:11:48 -- scripts/common.sh@343 -- # case "$op" in 00:08:36.555 04:11:48 -- scripts/common.sh@344 -- # : 1 00:08:36.555 04:11:48 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:36.555 04:11:48 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:36.555 04:11:48 -- scripts/common.sh@364 -- # decimal 1 00:08:36.555 04:11:48 -- scripts/common.sh@352 -- # local d=1 00:08:36.555 04:11:48 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:36.555 04:11:48 -- scripts/common.sh@354 -- # echo 1 00:08:36.555 04:11:48 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:36.555 04:11:48 -- scripts/common.sh@365 -- # decimal 2 00:08:36.555 04:11:48 -- scripts/common.sh@352 -- # local d=2 00:08:36.555 04:11:48 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:36.555 04:11:48 -- scripts/common.sh@354 -- # echo 2 00:08:36.555 04:11:48 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:36.556 04:11:48 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:36.556 04:11:48 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:36.556 04:11:48 -- scripts/common.sh@367 -- # return 0 00:08:36.556 04:11:48 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:36.556 04:11:48 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:36.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.556 --rc genhtml_branch_coverage=1 00:08:36.556 --rc genhtml_function_coverage=1 00:08:36.556 --rc genhtml_legend=1 00:08:36.556 --rc geninfo_all_blocks=1 00:08:36.556 --rc geninfo_unexecuted_blocks=1 00:08:36.556 00:08:36.556 ' 00:08:36.556 04:11:48 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:36.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.556 --rc genhtml_branch_coverage=1 00:08:36.556 --rc genhtml_function_coverage=1 00:08:36.556 --rc genhtml_legend=1 00:08:36.556 --rc geninfo_all_blocks=1 00:08:36.556 --rc geninfo_unexecuted_blocks=1 00:08:36.556 00:08:36.556 ' 00:08:36.556 04:11:48 -- common/autotest_common.sh@1704 -- 
# export 'LCOV=lcov 00:08:36.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.556 --rc genhtml_branch_coverage=1 00:08:36.556 --rc genhtml_function_coverage=1 00:08:36.556 --rc genhtml_legend=1 00:08:36.556 --rc geninfo_all_blocks=1 00:08:36.556 --rc geninfo_unexecuted_blocks=1 00:08:36.556 00:08:36.556 ' 00:08:36.556 04:11:48 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:36.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.556 --rc genhtml_branch_coverage=1 00:08:36.556 --rc genhtml_function_coverage=1 00:08:36.556 --rc genhtml_legend=1 00:08:36.556 --rc geninfo_all_blocks=1 00:08:36.556 --rc geninfo_unexecuted_blocks=1 00:08:36.556 00:08:36.556 ' 00:08:36.556 04:11:48 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:36.556 04:11:48 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:36.556 04:11:48 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:36.556 04:11:48 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:36.556 04:11:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.556 04:11:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.556 04:11:48 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.556 04:11:48 -- paths/export.sh@5 -- # export PATH 00:08:36.556 04:11:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.556 04:11:48 -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:08:36.556 04:11:48 -- 
common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:36.556 04:11:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:36.556 04:11:48 -- common/autotest_common.sh@10 -- # set +x 00:08:36.556 ************************************ 00:08:36.556 START TEST dd_malloc_copy 00:08:36.556 ************************************ 00:08:36.556 04:11:48 -- common/autotest_common.sh@1114 -- # malloc_copy 00:08:36.556 04:11:48 -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:08:36.556 04:11:48 -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:08:36.556 04:11:48 -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:08:36.556 04:11:48 -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:08:36.556 04:11:48 -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:08:36.556 04:11:48 -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:08:36.556 04:11:48 -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:08:36.556 04:11:48 -- dd/malloc.sh@28 -- # gen_conf 00:08:36.556 04:11:48 -- dd/common.sh@31 -- # xtrace_disable 00:08:36.556 04:11:48 -- common/autotest_common.sh@10 -- # set +x 00:08:36.556 [2024-12-06 04:11:49.019976] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:36.556 [2024-12-06 04:11:49.020076] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70859 ] 00:08:36.556 { 00:08:36.556 "subsystems": [ 00:08:36.556 { 00:08:36.556 "subsystem": "bdev", 00:08:36.556 "config": [ 00:08:36.556 { 00:08:36.556 "params": { 00:08:36.556 "block_size": 512, 00:08:36.556 "num_blocks": 1048576, 00:08:36.556 "name": "malloc0" 00:08:36.556 }, 00:08:36.556 "method": "bdev_malloc_create" 00:08:36.556 }, 00:08:36.556 { 00:08:36.556 "params": { 00:08:36.556 "block_size": 512, 00:08:36.556 "num_blocks": 1048576, 00:08:36.556 "name": "malloc1" 00:08:36.556 }, 00:08:36.556 "method": "bdev_malloc_create" 00:08:36.556 }, 00:08:36.556 { 00:08:36.556 "method": "bdev_wait_for_examine" 00:08:36.556 } 00:08:36.556 ] 00:08:36.556 } 00:08:36.556 ] 00:08:36.556 } 00:08:36.815 [2024-12-06 04:11:49.158046] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.815 [2024-12-06 04:11:49.234651] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.194  [2024-12-06T04:11:51.697Z] Copying: 216/512 [MB] (216 MBps) [2024-12-06T04:11:52.265Z] Copying: 433/512 [MB] (216 MBps) [2024-12-06T04:11:52.833Z] Copying: 512/512 [MB] (average 216 MBps) 00:08:40.268 00:08:40.268 04:11:52 -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:08:40.268 04:11:52 -- dd/malloc.sh@33 -- # gen_conf 00:08:40.268 04:11:52 -- dd/common.sh@31 -- # xtrace_disable 00:08:40.268 04:11:52 -- common/autotest_common.sh@10 -- # set +x 00:08:40.268 [2024-12-06 04:11:52.586896] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
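The dd_malloc_copy run above configures two 512 MiB malloc bdevs (1048576 blocks of 512 bytes each) through the JSON it passes on /dev/fd/62, then copies one into the other with --ib/--ob; 1048576 x 512 bytes is why the progress line reports 512/512 [MB]. A stand-alone equivalent, with the config written to a regular file instead of an inherited descriptor and spdk_dd assumed on PATH:
cat > malloc.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "params": { "name": "malloc0", "num_blocks": 1048576, "block_size": 512 },
          "method": "bdev_malloc_create" },
        { "params": { "name": "malloc1", "num_blocks": 1048576, "block_size": 512 },
          "method": "bdev_malloc_create" },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
JSON
spdk_dd --ib=malloc0 --ob=malloc1 --json malloc.json   # malloc-to-malloc copy, no real device needed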
00:08:40.268 [2024-12-06 04:11:52.586998] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70905 ] 00:08:40.268 { 00:08:40.268 "subsystems": [ 00:08:40.268 { 00:08:40.268 "subsystem": "bdev", 00:08:40.268 "config": [ 00:08:40.268 { 00:08:40.268 "params": { 00:08:40.268 "block_size": 512, 00:08:40.268 "num_blocks": 1048576, 00:08:40.268 "name": "malloc0" 00:08:40.268 }, 00:08:40.268 "method": "bdev_malloc_create" 00:08:40.268 }, 00:08:40.268 { 00:08:40.268 "params": { 00:08:40.268 "block_size": 512, 00:08:40.268 "num_blocks": 1048576, 00:08:40.268 "name": "malloc1" 00:08:40.268 }, 00:08:40.268 "method": "bdev_malloc_create" 00:08:40.268 }, 00:08:40.268 { 00:08:40.268 "method": "bdev_wait_for_examine" 00:08:40.268 } 00:08:40.268 ] 00:08:40.268 } 00:08:40.268 ] 00:08:40.268 } 00:08:40.268 [2024-12-06 04:11:52.716145] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.268 [2024-12-06 04:11:52.783350] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.646  [2024-12-06T04:11:55.603Z] Copying: 221/512 [MB] (221 MBps) [2024-12-06T04:11:55.603Z] Copying: 442/512 [MB] (221 MBps) [2024-12-06T04:11:56.169Z] Copying: 512/512 [MB] (average 220 MBps) 00:08:43.604 00:08:43.604 00:08:43.604 real 0m7.085s 00:08:43.604 user 0m6.097s 00:08:43.604 sys 0m0.835s 00:08:43.604 04:11:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:43.604 04:11:56 -- common/autotest_common.sh@10 -- # set +x 00:08:43.604 ************************************ 00:08:43.604 END TEST dd_malloc_copy 00:08:43.604 ************************************ 00:08:43.604 00:08:43.604 real 0m7.318s 00:08:43.604 user 0m6.222s 00:08:43.604 sys 0m0.951s 00:08:43.604 04:11:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:43.604 04:11:56 -- common/autotest_common.sh@10 -- # set +x 00:08:43.604 ************************************ 00:08:43.604 END TEST spdk_dd_malloc 00:08:43.604 ************************************ 00:08:43.604 04:11:56 -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:06.0 0000:00:07.0 00:08:43.604 04:11:56 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:43.604 04:11:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:43.604 04:11:56 -- common/autotest_common.sh@10 -- # set +x 00:08:43.604 ************************************ 00:08:43.604 START TEST spdk_dd_bdev_to_bdev 00:08:43.604 ************************************ 00:08:43.605 04:11:56 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:06.0 0000:00:07.0 00:08:43.863 * Looking for test storage... 
00:08:43.863 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:43.863 04:11:56 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:43.863 04:11:56 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:43.863 04:11:56 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:43.863 04:11:56 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:43.863 04:11:56 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:43.863 04:11:56 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:43.863 04:11:56 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:43.863 04:11:56 -- scripts/common.sh@335 -- # IFS=.-: 00:08:43.863 04:11:56 -- scripts/common.sh@335 -- # read -ra ver1 00:08:43.863 04:11:56 -- scripts/common.sh@336 -- # IFS=.-: 00:08:43.863 04:11:56 -- scripts/common.sh@336 -- # read -ra ver2 00:08:43.863 04:11:56 -- scripts/common.sh@337 -- # local 'op=<' 00:08:43.863 04:11:56 -- scripts/common.sh@339 -- # ver1_l=2 00:08:43.863 04:11:56 -- scripts/common.sh@340 -- # ver2_l=1 00:08:43.863 04:11:56 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:43.863 04:11:56 -- scripts/common.sh@343 -- # case "$op" in 00:08:43.863 04:11:56 -- scripts/common.sh@344 -- # : 1 00:08:43.863 04:11:56 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:43.863 04:11:56 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:43.863 04:11:56 -- scripts/common.sh@364 -- # decimal 1 00:08:43.863 04:11:56 -- scripts/common.sh@352 -- # local d=1 00:08:43.863 04:11:56 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:43.863 04:11:56 -- scripts/common.sh@354 -- # echo 1 00:08:43.863 04:11:56 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:43.863 04:11:56 -- scripts/common.sh@365 -- # decimal 2 00:08:43.863 04:11:56 -- scripts/common.sh@352 -- # local d=2 00:08:43.863 04:11:56 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:43.863 04:11:56 -- scripts/common.sh@354 -- # echo 2 00:08:43.863 04:11:56 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:43.863 04:11:56 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:43.863 04:11:56 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:43.863 04:11:56 -- scripts/common.sh@367 -- # return 0 00:08:43.863 04:11:56 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:43.863 04:11:56 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:43.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:43.863 --rc genhtml_branch_coverage=1 00:08:43.863 --rc genhtml_function_coverage=1 00:08:43.863 --rc genhtml_legend=1 00:08:43.863 --rc geninfo_all_blocks=1 00:08:43.863 --rc geninfo_unexecuted_blocks=1 00:08:43.863 00:08:43.863 ' 00:08:43.863 04:11:56 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:43.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:43.863 --rc genhtml_branch_coverage=1 00:08:43.863 --rc genhtml_function_coverage=1 00:08:43.863 --rc genhtml_legend=1 00:08:43.863 --rc geninfo_all_blocks=1 00:08:43.863 --rc geninfo_unexecuted_blocks=1 00:08:43.863 00:08:43.863 ' 00:08:43.863 04:11:56 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:43.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:43.863 --rc genhtml_branch_coverage=1 00:08:43.863 --rc genhtml_function_coverage=1 00:08:43.863 --rc genhtml_legend=1 00:08:43.863 --rc geninfo_all_blocks=1 00:08:43.863 --rc geninfo_unexecuted_blocks=1 00:08:43.863 00:08:43.863 ' 00:08:43.863 04:11:56 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:43.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:43.863 --rc genhtml_branch_coverage=1 00:08:43.863 --rc genhtml_function_coverage=1 00:08:43.863 --rc genhtml_legend=1 00:08:43.863 --rc geninfo_all_blocks=1 00:08:43.863 --rc geninfo_unexecuted_blocks=1 00:08:43.863 00:08:43.863 ' 00:08:43.863 04:11:56 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:43.863 04:11:56 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:43.863 04:11:56 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:43.863 04:11:56 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:43.863 04:11:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.863 04:11:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.864 04:11:56 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.864 04:11:56 -- paths/export.sh@5 -- # export PATH 00:08:43.864 04:11:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.864 04:11:56 -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:08:43.864 04:11:56 -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:08:43.864 04:11:56 -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:08:43.864 04:11:56 -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:08:43.864 04:11:56 -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:08:43.864 04:11:56 -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:08:43.864 04:11:56 -- dd/bdev_to_bdev.sh@52 -- # 
nvme0_pci=0000:00:06.0 00:08:43.864 04:11:56 -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:08:43.864 04:11:56 -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:08:43.864 04:11:56 -- dd/bdev_to_bdev.sh@53 -- # nvme1_pci=0000:00:07.0 00:08:43.864 04:11:56 -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:06.0' ['trtype']='pcie') 00:08:43.864 04:11:56 -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:08:43.864 04:11:56 -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:07.0' ['trtype']='pcie') 00:08:43.864 04:11:56 -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:08:43.864 04:11:56 -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:43.864 04:11:56 -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:43.864 04:11:56 -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:08:43.864 04:11:56 -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:08:43.864 04:11:56 -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:08:43.864 04:11:56 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:08:43.864 04:11:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:43.864 04:11:56 -- common/autotest_common.sh@10 -- # set +x 00:08:43.864 ************************************ 00:08:43.864 START TEST dd_inflate_file 00:08:43.864 ************************************ 00:08:43.864 04:11:56 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:08:43.864 [2024-12-06 04:11:56.388601] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:43.864 [2024-12-06 04:11:56.388875] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71018 ] 00:08:44.121 [2024-12-06 04:11:56.522575] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.121 [2024-12-06 04:11:56.604322] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.380  [2024-12-06T04:11:56.945Z] Copying: 64/64 [MB] (average 1729 MBps) 00:08:44.380 00:08:44.380 00:08:44.380 real 0m0.592s 00:08:44.380 user 0m0.282s 00:08:44.380 sys 0m0.192s 00:08:44.380 04:11:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:44.380 04:11:56 -- common/autotest_common.sh@10 -- # set +x 00:08:44.380 ************************************ 00:08:44.380 END TEST dd_inflate_file 00:08:44.380 ************************************ 00:08:44.638 04:11:56 -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:08:44.639 04:11:56 -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:08:44.639 04:11:56 -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:08:44.639 04:11:56 -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:08:44.639 04:11:56 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:08:44.639 04:11:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:44.639 04:11:56 -- dd/common.sh@31 -- # xtrace_disable 00:08:44.639 04:11:56 -- common/autotest_common.sh@10 -- # set +x 00:08:44.639 04:11:56 -- common/autotest_common.sh@10 -- # set +x 00:08:44.639 ************************************ 00:08:44.639 START TEST dd_copy_to_out_bdev 00:08:44.639 ************************************ 00:08:44.639 04:11:56 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:08:44.639 [2024-12-06 04:11:57.043823] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
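The 67108891 bytes reported by wc -c above decompose cleanly: the 26-character magic line plus its newline (27 bytes), followed by 64 appended blocks of 1 MiB of zeroes (64 x 1048576 = 67108864). A sketch of the same sequence; writing the magic into dd.dump0 first is implied by that total rather than visible in the trace, and spdk_dd is assumed on PATH:
echo 'This Is Our Magic, find it' > dd.dump0             # 27 bytes including the trailing newline
spdk_dd --if=/dev/zero --of=dd.dump0 --oflag=append --bs=1048576 --count=64
wc -c < dd.dump0                                          # 27 + 64*1048576 = 67108891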
00:08:44.639 [2024-12-06 04:11:57.043927] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71055 ] 00:08:44.639 { 00:08:44.639 "subsystems": [ 00:08:44.639 { 00:08:44.639 "subsystem": "bdev", 00:08:44.639 "config": [ 00:08:44.639 { 00:08:44.639 "params": { 00:08:44.639 "trtype": "pcie", 00:08:44.639 "traddr": "0000:00:06.0", 00:08:44.639 "name": "Nvme0" 00:08:44.639 }, 00:08:44.639 "method": "bdev_nvme_attach_controller" 00:08:44.639 }, 00:08:44.639 { 00:08:44.639 "params": { 00:08:44.639 "trtype": "pcie", 00:08:44.639 "traddr": "0000:00:07.0", 00:08:44.639 "name": "Nvme1" 00:08:44.639 }, 00:08:44.639 "method": "bdev_nvme_attach_controller" 00:08:44.639 }, 00:08:44.639 { 00:08:44.639 "method": "bdev_wait_for_examine" 00:08:44.639 } 00:08:44.639 ] 00:08:44.639 } 00:08:44.639 ] 00:08:44.639 } 00:08:44.639 [2024-12-06 04:11:57.184494] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.898 [2024-12-06 04:11:57.264239] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.276  [2024-12-06T04:11:58.841Z] Copying: 52/64 [MB] (52 MBps) [2024-12-06T04:11:59.099Z] Copying: 64/64 [MB] (average 52 MBps) 00:08:46.534 00:08:46.534 ************************************ 00:08:46.534 END TEST dd_copy_to_out_bdev 00:08:46.534 ************************************ 00:08:46.534 00:08:46.534 real 0m1.974s 00:08:46.534 user 0m1.692s 00:08:46.534 sys 0m0.219s 00:08:46.534 04:11:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:46.534 04:11:58 -- common/autotest_common.sh@10 -- # set +x 00:08:46.534 04:11:59 -- dd/bdev_to_bdev.sh@113 -- # count=65 00:08:46.534 04:11:59 -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:08:46.534 04:11:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:46.534 04:11:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:46.534 04:11:59 -- common/autotest_common.sh@10 -- # set +x 00:08:46.534 ************************************ 00:08:46.534 START TEST dd_offset_magic 00:08:46.534 ************************************ 00:08:46.534 04:11:59 -- common/autotest_common.sh@1114 -- # offset_magic 00:08:46.534 04:11:59 -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:08:46.534 04:11:59 -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:08:46.534 04:11:59 -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:08:46.534 04:11:59 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:08:46.534 04:11:59 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:08:46.534 04:11:59 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:08:46.534 04:11:59 -- dd/common.sh@31 -- # xtrace_disable 00:08:46.534 04:11:59 -- common/autotest_common.sh@10 -- # set +x 00:08:46.534 [2024-12-06 04:11:59.062001] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
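dd_copy_to_out_bdev above attaches both NVMe controllers by PCI address (0000:00:06.0 as Nvme0, 0000:00:07.0 as Nvme1) through the JSON on /dev/fd/62 and writes the dump file into the first namespace, Nvme0n1. A stand-alone equivalent with the config in a regular file and spdk_dd assumed on PATH:
cat > nvme.json <<'JSON'
{ "subsystems": [ { "subsystem": "bdev", "config": [
    { "params": { "trtype": "pcie", "traddr": "0000:00:06.0", "name": "Nvme0" },
      "method": "bdev_nvme_attach_controller" },
    { "params": { "trtype": "pcie", "traddr": "0000:00:07.0", "name": "Nvme1" },
      "method": "bdev_nvme_attach_controller" },
    { "method": "bdev_wait_for_examine" } ] } ] }
JSON
spdk_dd --if=dd.dump0 --ob=Nvme0n1 --json nvme.json      # file into the first NVMe namespace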
00:08:46.534 [2024-12-06 04:11:59.062075] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71099 ] 00:08:46.534 { 00:08:46.534 "subsystems": [ 00:08:46.534 { 00:08:46.534 "subsystem": "bdev", 00:08:46.534 "config": [ 00:08:46.534 { 00:08:46.534 "params": { 00:08:46.534 "trtype": "pcie", 00:08:46.534 "traddr": "0000:00:06.0", 00:08:46.534 "name": "Nvme0" 00:08:46.534 }, 00:08:46.534 "method": "bdev_nvme_attach_controller" 00:08:46.534 }, 00:08:46.534 { 00:08:46.534 "params": { 00:08:46.534 "trtype": "pcie", 00:08:46.534 "traddr": "0000:00:07.0", 00:08:46.534 "name": "Nvme1" 00:08:46.534 }, 00:08:46.534 "method": "bdev_nvme_attach_controller" 00:08:46.534 }, 00:08:46.534 { 00:08:46.534 "method": "bdev_wait_for_examine" 00:08:46.534 } 00:08:46.534 ] 00:08:46.534 } 00:08:46.534 ] 00:08:46.534 } 00:08:46.793 [2024-12-06 04:11:59.193778] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.793 [2024-12-06 04:11:59.266756] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.052  [2024-12-06T04:11:59.889Z] Copying: 65/65 [MB] (average 902 MBps) 00:08:47.324 00:08:47.324 04:11:59 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:08:47.324 04:11:59 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:08:47.324 04:11:59 -- dd/common.sh@31 -- # xtrace_disable 00:08:47.324 04:11:59 -- common/autotest_common.sh@10 -- # set +x 00:08:47.324 [2024-12-06 04:11:59.841389] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:47.324 [2024-12-06 04:11:59.841492] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71108 ] 00:08:47.324 { 00:08:47.324 "subsystems": [ 00:08:47.324 { 00:08:47.324 "subsystem": "bdev", 00:08:47.324 "config": [ 00:08:47.324 { 00:08:47.324 "params": { 00:08:47.324 "trtype": "pcie", 00:08:47.324 "traddr": "0000:00:06.0", 00:08:47.324 "name": "Nvme0" 00:08:47.324 }, 00:08:47.324 "method": "bdev_nvme_attach_controller" 00:08:47.324 }, 00:08:47.324 { 00:08:47.324 "params": { 00:08:47.324 "trtype": "pcie", 00:08:47.324 "traddr": "0000:00:07.0", 00:08:47.324 "name": "Nvme1" 00:08:47.324 }, 00:08:47.324 "method": "bdev_nvme_attach_controller" 00:08:47.324 }, 00:08:47.324 { 00:08:47.324 "method": "bdev_wait_for_examine" 00:08:47.324 } 00:08:47.324 ] 00:08:47.324 } 00:08:47.324 ] 00:08:47.324 } 00:08:47.584 [2024-12-06 04:11:59.975116] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.584 [2024-12-06 04:12:00.061380] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.843  [2024-12-06T04:12:00.668Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:48.103 00:08:48.103 04:12:00 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:08:48.103 04:12:00 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:08:48.103 04:12:00 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:08:48.103 04:12:00 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:08:48.103 04:12:00 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:08:48.103 04:12:00 -- dd/common.sh@31 -- # xtrace_disable 00:08:48.103 04:12:00 -- common/autotest_common.sh@10 -- # set +x 00:08:48.103 [2024-12-06 04:12:00.551673] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:48.103 [2024-12-06 04:12:00.551788] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71128 ] 00:08:48.103 { 00:08:48.103 "subsystems": [ 00:08:48.103 { 00:08:48.103 "subsystem": "bdev", 00:08:48.103 "config": [ 00:08:48.103 { 00:08:48.103 "params": { 00:08:48.103 "trtype": "pcie", 00:08:48.103 "traddr": "0000:00:06.0", 00:08:48.103 "name": "Nvme0" 00:08:48.103 }, 00:08:48.103 "method": "bdev_nvme_attach_controller" 00:08:48.103 }, 00:08:48.103 { 00:08:48.103 "params": { 00:08:48.103 "trtype": "pcie", 00:08:48.103 "traddr": "0000:00:07.0", 00:08:48.103 "name": "Nvme1" 00:08:48.103 }, 00:08:48.103 "method": "bdev_nvme_attach_controller" 00:08:48.103 }, 00:08:48.103 { 00:08:48.103 "method": "bdev_wait_for_examine" 00:08:48.103 } 00:08:48.103 ] 00:08:48.103 } 00:08:48.103 ] 00:08:48.103 } 00:08:48.362 [2024-12-06 04:12:00.691148] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.362 [2024-12-06 04:12:00.753382] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.621  [2024-12-06T04:12:01.445Z] Copying: 65/65 [MB] (average 955 MBps) 00:08:48.880 00:08:48.880 04:12:01 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:08:48.880 04:12:01 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:08:48.880 04:12:01 -- dd/common.sh@31 -- # xtrace_disable 00:08:48.880 04:12:01 -- common/autotest_common.sh@10 -- # set +x 00:08:48.880 { 00:08:48.880 "subsystems": [ 00:08:48.880 { 00:08:48.880 "subsystem": "bdev", 00:08:48.880 "config": [ 00:08:48.880 { 00:08:48.880 "params": { 00:08:48.880 "trtype": "pcie", 00:08:48.880 "traddr": "0000:00:06.0", 00:08:48.880 "name": "Nvme0" 00:08:48.880 }, 00:08:48.880 "method": "bdev_nvme_attach_controller" 00:08:48.880 }, 00:08:48.880 { 00:08:48.880 "params": { 00:08:48.880 "trtype": "pcie", 00:08:48.880 "traddr": "0000:00:07.0", 00:08:48.880 "name": "Nvme1" 00:08:48.880 }, 00:08:48.880 "method": "bdev_nvme_attach_controller" 00:08:48.880 }, 00:08:48.880 { 00:08:48.880 "method": "bdev_wait_for_examine" 00:08:48.880 } 00:08:48.880 ] 00:08:48.880 } 00:08:48.880 ] 00:08:48.880 } 00:08:48.880 [2024-12-06 04:12:01.341916] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:48.880 [2024-12-06 04:12:01.342221] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71148 ] 00:08:49.141 [2024-12-06 04:12:01.478864] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.141 [2024-12-06 04:12:01.553352] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.411  [2024-12-06T04:12:02.243Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:49.678 00:08:49.678 04:12:02 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:08:49.678 04:12:02 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:08:49.678 00:08:49.678 real 0m2.983s 00:08:49.678 user 0m2.100s 00:08:49.678 sys 0m0.671s 00:08:49.678 04:12:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:49.678 04:12:02 -- common/autotest_common.sh@10 -- # set +x 00:08:49.678 ************************************ 00:08:49.678 END TEST dd_offset_magic 00:08:49.678 ************************************ 00:08:49.678 04:12:02 -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:08:49.678 04:12:02 -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:08:49.678 04:12:02 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:49.678 04:12:02 -- dd/common.sh@11 -- # local nvme_ref= 00:08:49.678 04:12:02 -- dd/common.sh@12 -- # local size=4194330 00:08:49.678 04:12:02 -- dd/common.sh@14 -- # local bs=1048576 00:08:49.678 04:12:02 -- dd/common.sh@15 -- # local count=5 00:08:49.678 04:12:02 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:08:49.678 04:12:02 -- dd/common.sh@18 -- # gen_conf 00:08:49.678 04:12:02 -- dd/common.sh@31 -- # xtrace_disable 00:08:49.678 04:12:02 -- common/autotest_common.sh@10 -- # set +x 00:08:49.678 [2024-12-06 04:12:02.092484] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
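The dd_offset_magic pass above copies a 65 MiB window between the two NVMe bdevs at each offset (--count=65 --seek=16/64), reads 1 MiB back from the destination at the same offset, and then checks a 26-byte marker. A rough sketch of that marker check, assuming it is read from the dd.dump1 file used in this run:

  # Compare the first 26 bytes of the copied-back window with the expected marker.
  read -rn26 magic_check < /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
  [[ $magic_check == "This Is Our Magic, find it" ]]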
00:08:49.678 [2024-12-06 04:12:02.092773] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71178 ] 00:08:49.678 { 00:08:49.678 "subsystems": [ 00:08:49.678 { 00:08:49.678 "subsystem": "bdev", 00:08:49.678 "config": [ 00:08:49.678 { 00:08:49.678 "params": { 00:08:49.678 "trtype": "pcie", 00:08:49.678 "traddr": "0000:00:06.0", 00:08:49.678 "name": "Nvme0" 00:08:49.678 }, 00:08:49.678 "method": "bdev_nvme_attach_controller" 00:08:49.678 }, 00:08:49.678 { 00:08:49.678 "params": { 00:08:49.678 "trtype": "pcie", 00:08:49.678 "traddr": "0000:00:07.0", 00:08:49.678 "name": "Nvme1" 00:08:49.678 }, 00:08:49.678 "method": "bdev_nvme_attach_controller" 00:08:49.678 }, 00:08:49.678 { 00:08:49.678 "method": "bdev_wait_for_examine" 00:08:49.678 } 00:08:49.678 ] 00:08:49.678 } 00:08:49.678 ] 00:08:49.678 } 00:08:49.678 [2024-12-06 04:12:02.231867] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.937 [2024-12-06 04:12:02.297573] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.197  [2024-12-06T04:12:02.762Z] Copying: 5120/5120 [kB] (average 1250 MBps) 00:08:50.197 00:08:50.197 04:12:02 -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:08:50.197 04:12:02 -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:08:50.197 04:12:02 -- dd/common.sh@11 -- # local nvme_ref= 00:08:50.197 04:12:02 -- dd/common.sh@12 -- # local size=4194330 00:08:50.197 04:12:02 -- dd/common.sh@14 -- # local bs=1048576 00:08:50.197 04:12:02 -- dd/common.sh@15 -- # local count=5 00:08:50.197 04:12:02 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:08:50.197 04:12:02 -- dd/common.sh@18 -- # gen_conf 00:08:50.197 04:12:02 -- dd/common.sh@31 -- # xtrace_disable 00:08:50.197 04:12:02 -- common/autotest_common.sh@10 -- # set +x 00:08:50.456 [2024-12-06 04:12:02.794036] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:50.456 [2024-12-06 04:12:02.794364] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71192 ] 00:08:50.456 { 00:08:50.456 "subsystems": [ 00:08:50.456 { 00:08:50.456 "subsystem": "bdev", 00:08:50.456 "config": [ 00:08:50.456 { 00:08:50.456 "params": { 00:08:50.456 "trtype": "pcie", 00:08:50.456 "traddr": "0000:00:06.0", 00:08:50.456 "name": "Nvme0" 00:08:50.456 }, 00:08:50.456 "method": "bdev_nvme_attach_controller" 00:08:50.456 }, 00:08:50.456 { 00:08:50.456 "params": { 00:08:50.456 "trtype": "pcie", 00:08:50.456 "traddr": "0000:00:07.0", 00:08:50.456 "name": "Nvme1" 00:08:50.456 }, 00:08:50.456 "method": "bdev_nvme_attach_controller" 00:08:50.456 }, 00:08:50.456 { 00:08:50.456 "method": "bdev_wait_for_examine" 00:08:50.456 } 00:08:50.456 ] 00:08:50.456 } 00:08:50.456 ] 00:08:50.456 } 00:08:50.456 [2024-12-06 04:12:02.933694] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.456 [2024-12-06 04:12:03.000018] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.716  [2024-12-06T04:12:03.540Z] Copying: 5120/5120 [kB] (average 833 MBps) 00:08:50.975 00:08:50.975 04:12:03 -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:08:50.975 ************************************ 00:08:50.975 END TEST spdk_dd_bdev_to_bdev 00:08:50.975 ************************************ 00:08:50.975 00:08:50.975 real 0m7.293s 00:08:50.975 user 0m5.189s 00:08:50.975 sys 0m1.601s 00:08:50.975 04:12:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:50.975 04:12:03 -- common/autotest_common.sh@10 -- # set +x 00:08:50.975 04:12:03 -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:08:50.975 04:12:03 -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:08:50.975 04:12:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:50.975 04:12:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:50.975 04:12:03 -- common/autotest_common.sh@10 -- # set +x 00:08:50.975 ************************************ 00:08:50.975 START TEST spdk_dd_uring 00:08:50.975 ************************************ 00:08:50.975 04:12:03 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:08:51.236 * Looking for test storage... 
00:08:51.236 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:51.236 04:12:03 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:51.236 04:12:03 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:51.236 04:12:03 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:51.236 04:12:03 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:51.236 04:12:03 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:51.236 04:12:03 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:51.236 04:12:03 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:51.236 04:12:03 -- scripts/common.sh@335 -- # IFS=.-: 00:08:51.236 04:12:03 -- scripts/common.sh@335 -- # read -ra ver1 00:08:51.236 04:12:03 -- scripts/common.sh@336 -- # IFS=.-: 00:08:51.236 04:12:03 -- scripts/common.sh@336 -- # read -ra ver2 00:08:51.236 04:12:03 -- scripts/common.sh@337 -- # local 'op=<' 00:08:51.236 04:12:03 -- scripts/common.sh@339 -- # ver1_l=2 00:08:51.236 04:12:03 -- scripts/common.sh@340 -- # ver2_l=1 00:08:51.236 04:12:03 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:51.236 04:12:03 -- scripts/common.sh@343 -- # case "$op" in 00:08:51.236 04:12:03 -- scripts/common.sh@344 -- # : 1 00:08:51.236 04:12:03 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:51.236 04:12:03 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:51.236 04:12:03 -- scripts/common.sh@364 -- # decimal 1 00:08:51.236 04:12:03 -- scripts/common.sh@352 -- # local d=1 00:08:51.236 04:12:03 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:51.236 04:12:03 -- scripts/common.sh@354 -- # echo 1 00:08:51.236 04:12:03 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:51.236 04:12:03 -- scripts/common.sh@365 -- # decimal 2 00:08:51.236 04:12:03 -- scripts/common.sh@352 -- # local d=2 00:08:51.236 04:12:03 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:51.236 04:12:03 -- scripts/common.sh@354 -- # echo 2 00:08:51.236 04:12:03 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:51.236 04:12:03 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:51.236 04:12:03 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:51.236 04:12:03 -- scripts/common.sh@367 -- # return 0 00:08:51.236 04:12:03 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:51.236 04:12:03 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:51.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.236 --rc genhtml_branch_coverage=1 00:08:51.236 --rc genhtml_function_coverage=1 00:08:51.236 --rc genhtml_legend=1 00:08:51.236 --rc geninfo_all_blocks=1 00:08:51.236 --rc geninfo_unexecuted_blocks=1 00:08:51.236 00:08:51.236 ' 00:08:51.236 04:12:03 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:51.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.236 --rc genhtml_branch_coverage=1 00:08:51.236 --rc genhtml_function_coverage=1 00:08:51.236 --rc genhtml_legend=1 00:08:51.236 --rc geninfo_all_blocks=1 00:08:51.236 --rc geninfo_unexecuted_blocks=1 00:08:51.236 00:08:51.236 ' 00:08:51.236 04:12:03 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:51.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.236 --rc genhtml_branch_coverage=1 00:08:51.236 --rc genhtml_function_coverage=1 00:08:51.236 --rc genhtml_legend=1 00:08:51.236 --rc geninfo_all_blocks=1 00:08:51.236 --rc geninfo_unexecuted_blocks=1 00:08:51.236 00:08:51.236 ' 00:08:51.236 04:12:03 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:51.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.236 --rc genhtml_branch_coverage=1 00:08:51.236 --rc genhtml_function_coverage=1 00:08:51.236 --rc genhtml_legend=1 00:08:51.236 --rc geninfo_all_blocks=1 00:08:51.236 --rc geninfo_unexecuted_blocks=1 00:08:51.236 00:08:51.236 ' 00:08:51.236 04:12:03 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:51.236 04:12:03 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:51.236 04:12:03 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:51.236 04:12:03 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:51.236 04:12:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.236 04:12:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.236 04:12:03 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.236 04:12:03 -- paths/export.sh@5 -- # export PATH 00:08:51.236 04:12:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.236 04:12:03 -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:08:51.236 04:12:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:51.236 04:12:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:51.236 04:12:03 -- common/autotest_common.sh@10 -- # set +x 00:08:51.237 ************************************ 00:08:51.237 START TEST dd_uring_copy 00:08:51.237 ************************************ 00:08:51.237 04:12:03 
-- common/autotest_common.sh@1114 -- # uring_zram_copy 00:08:51.237 04:12:03 -- dd/uring.sh@15 -- # local zram_dev_id 00:08:51.237 04:12:03 -- dd/uring.sh@16 -- # local magic 00:08:51.237 04:12:03 -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:08:51.237 04:12:03 -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:08:51.237 04:12:03 -- dd/uring.sh@19 -- # local verify_magic 00:08:51.237 04:12:03 -- dd/uring.sh@21 -- # init_zram 00:08:51.237 04:12:03 -- dd/common.sh@163 -- # [[ -e /sys/class/zram-control ]] 00:08:51.237 04:12:03 -- dd/common.sh@164 -- # return 00:08:51.237 04:12:03 -- dd/uring.sh@22 -- # create_zram_dev 00:08:51.237 04:12:03 -- dd/common.sh@168 -- # cat /sys/class/zram-control/hot_add 00:08:51.237 04:12:03 -- dd/uring.sh@22 -- # zram_dev_id=1 00:08:51.237 04:12:03 -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:08:51.237 04:12:03 -- dd/common.sh@181 -- # local id=1 00:08:51.237 04:12:03 -- dd/common.sh@182 -- # local size=512M 00:08:51.237 04:12:03 -- dd/common.sh@184 -- # [[ -e /sys/block/zram1 ]] 00:08:51.237 04:12:03 -- dd/common.sh@186 -- # echo 512M 00:08:51.237 04:12:03 -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:08:51.237 04:12:03 -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:08:51.237 04:12:03 -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:08:51.237 04:12:03 -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:08:51.237 04:12:03 -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:08:51.237 04:12:03 -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:08:51.237 04:12:03 -- dd/uring.sh@41 -- # gen_bytes 1024 00:08:51.237 04:12:03 -- dd/common.sh@98 -- # xtrace_disable 00:08:51.237 04:12:03 -- common/autotest_common.sh@10 -- # set +x 00:08:51.237 04:12:03 -- dd/uring.sh@41 -- # magic=z06vucg55g6pm4mww8vgok6ywj46jxwvqofpubjnvsc6mbxkxi0tlnuwrpl2w4pha96gy1qql4zdwbbkhh5rfp7mzrdvnalb0o6plx3qd4wiql3yec629rnpthofu2b0sozx680u5oyvstka1q1l9zxem3wo3ullrjoe4isx6vxpfkiegd8oqb3m1m6fg5dsx8t85lhjrp1oaxw6nctipbfb77a2fn4oxw7fgwygeus4zylbg093halrbkejqcatezrepbiwqrox7oahcuwhpw68o0l2a8pfr6z172ez8rgak0n64yclfsa2he9zrmizs7jlcplczdgjq4km7cb1pe0p60amp1trm8cbff04me5z9qmo9vb0hq9odh1o1zm0ljupsa7xnay7anflwpuned0czvcc89nbmmiozmipjxvldqjcjcuu45ya6l4x0yht2rjopckchercyvdvzx1vuawkkovnyb1k0x6gb164b33famt2eh1u9e7rppwuy2z382b2fxoepqzzwempsozffuybu4s7sdf19db9b96ecq3ljoht09au1t3gdp74lg6fsrucxwlwmrkr5ce6vrnoe5z6y2mm9we3pbwi9ylkzpd5gg6b39n64ed69k8hv84qz4ggfglj5gcq3kbcwevuomqt2cwf0af2k4pmqewhccxxp05f3cig7hvixjqyd26mhwsht3liwh5ejm90780yaye2jf5x346ukglq6eco5f3kyujg8b56u35zg42am0jz59beis4z6iis9t5ej7o2s6042t7ug432olg9ymatu8wvg4cee1vhnqp575kij0prj1zantsjbpc3ldp4savpzkob1dnno96kymk4nhiczrvddspygrip5m4i3oh96tt9uffh2vmj9t656nx52wuwkmyx14jiplpp51dcaoumwy4wki8krlse6dgtihfyte6zyhblh7c8l0lamnhxphb1riuf2c42kooxv21x9o7pxjvy25sw76e3hk6s1pjnxbv0 00:08:51.237 04:12:03 -- dd/uring.sh@42 -- # echo 
z06vucg55g6pm4mww8vgok6ywj46jxwvqofpubjnvsc6mbxkxi0tlnuwrpl2w4pha96gy1qql4zdwbbkhh5rfp7mzrdvnalb0o6plx3qd4wiql3yec629rnpthofu2b0sozx680u5oyvstka1q1l9zxem3wo3ullrjoe4isx6vxpfkiegd8oqb3m1m6fg5dsx8t85lhjrp1oaxw6nctipbfb77a2fn4oxw7fgwygeus4zylbg093halrbkejqcatezrepbiwqrox7oahcuwhpw68o0l2a8pfr6z172ez8rgak0n64yclfsa2he9zrmizs7jlcplczdgjq4km7cb1pe0p60amp1trm8cbff04me5z9qmo9vb0hq9odh1o1zm0ljupsa7xnay7anflwpuned0czvcc89nbmmiozmipjxvldqjcjcuu45ya6l4x0yht2rjopckchercyvdvzx1vuawkkovnyb1k0x6gb164b33famt2eh1u9e7rppwuy2z382b2fxoepqzzwempsozffuybu4s7sdf19db9b96ecq3ljoht09au1t3gdp74lg6fsrucxwlwmrkr5ce6vrnoe5z6y2mm9we3pbwi9ylkzpd5gg6b39n64ed69k8hv84qz4ggfglj5gcq3kbcwevuomqt2cwf0af2k4pmqewhccxxp05f3cig7hvixjqyd26mhwsht3liwh5ejm90780yaye2jf5x346ukglq6eco5f3kyujg8b56u35zg42am0jz59beis4z6iis9t5ej7o2s6042t7ug432olg9ymatu8wvg4cee1vhnqp575kij0prj1zantsjbpc3ldp4savpzkob1dnno96kymk4nhiczrvddspygrip5m4i3oh96tt9uffh2vmj9t656nx52wuwkmyx14jiplpp51dcaoumwy4wki8krlse6dgtihfyte6zyhblh7c8l0lamnhxphb1riuf2c42kooxv21x9o7pxjvy25sw76e3hk6s1pjnxbv0 00:08:51.237 04:12:03 -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:08:51.237 [2024-12-06 04:12:03.771622] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:51.237 [2024-12-06 04:12:03.771728] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71268 ] 00:08:51.496 [2024-12-06 04:12:03.904820] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.496 [2024-12-06 04:12:03.984377] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.065  [2024-12-06T04:12:05.199Z] Copying: 511/511 [MB] (average 1505 MBps) 00:08:52.634 00:08:52.634 04:12:04 -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:08:52.634 04:12:04 -- dd/uring.sh@54 -- # gen_conf 00:08:52.634 04:12:04 -- dd/common.sh@31 -- # xtrace_disable 00:08:52.634 04:12:04 -- common/autotest_common.sh@10 -- # set +x 00:08:52.634 [2024-12-06 04:12:05.026882] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:52.634 [2024-12-06 04:12:05.026998] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71288 ] 00:08:52.634 { 00:08:52.634 "subsystems": [ 00:08:52.634 { 00:08:52.634 "subsystem": "bdev", 00:08:52.634 "config": [ 00:08:52.634 { 00:08:52.634 "params": { 00:08:52.634 "block_size": 512, 00:08:52.634 "num_blocks": 1048576, 00:08:52.634 "name": "malloc0" 00:08:52.634 }, 00:08:52.634 "method": "bdev_malloc_create" 00:08:52.634 }, 00:08:52.634 { 00:08:52.634 "params": { 00:08:52.634 "filename": "/dev/zram1", 00:08:52.634 "name": "uring0" 00:08:52.634 }, 00:08:52.634 "method": "bdev_uring_create" 00:08:52.634 }, 00:08:52.634 { 00:08:52.634 "method": "bdev_wait_for_examine" 00:08:52.634 } 00:08:52.634 ] 00:08:52.634 } 00:08:52.634 ] 00:08:52.634 } 00:08:52.634 [2024-12-06 04:12:05.167527] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.892 [2024-12-06 04:12:05.250145] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.270  [2024-12-06T04:12:07.773Z] Copying: 198/512 [MB] (198 MBps) [2024-12-06T04:12:08.033Z] Copying: 407/512 [MB] (209 MBps) [2024-12-06T04:12:08.602Z] Copying: 512/512 [MB] (average 205 MBps) 00:08:56.037 00:08:56.037 04:12:08 -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:08:56.037 04:12:08 -- dd/uring.sh@60 -- # gen_conf 00:08:56.037 04:12:08 -- dd/common.sh@31 -- # xtrace_disable 00:08:56.037 04:12:08 -- common/autotest_common.sh@10 -- # set +x 00:08:56.037 [2024-12-06 04:12:08.453878] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
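The uring copy above targets a bdev_uring device (uring0) backed by /dev/zram1, which init_zram/set_zram_dev brought up just before the copy. A rough sketch of that zram lifecycle, assuming the standard zram-control sysfs interface and the 512M size used here (the device index may differ on another machine):

  # Allocate a new zram device; the read returns its index (1 -> /dev/zram1).
  dev_id=$(cat /sys/class/zram-control/hot_add)
  echo 512M > "/sys/block/zram${dev_id}/disksize"

  # ... run the spdk_dd copies against /dev/zram${dev_id} ...

  # Tear down: reset the device, then hand the index back to the kernel.
  echo 1 > "/sys/block/zram${dev_id}/reset"
  echo "$dev_id" > /sys/class/zram-control/hot_remove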
00:08:56.037 [2024-12-06 04:12:08.454028] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71331 ] 00:08:56.037 { 00:08:56.037 "subsystems": [ 00:08:56.037 { 00:08:56.037 "subsystem": "bdev", 00:08:56.037 "config": [ 00:08:56.037 { 00:08:56.037 "params": { 00:08:56.037 "block_size": 512, 00:08:56.037 "num_blocks": 1048576, 00:08:56.037 "name": "malloc0" 00:08:56.037 }, 00:08:56.037 "method": "bdev_malloc_create" 00:08:56.037 }, 00:08:56.037 { 00:08:56.037 "params": { 00:08:56.037 "filename": "/dev/zram1", 00:08:56.037 "name": "uring0" 00:08:56.037 }, 00:08:56.037 "method": "bdev_uring_create" 00:08:56.037 }, 00:08:56.037 { 00:08:56.037 "method": "bdev_wait_for_examine" 00:08:56.037 } 00:08:56.037 ] 00:08:56.037 } 00:08:56.037 ] 00:08:56.037 } 00:08:56.037 [2024-12-06 04:12:08.595302] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:56.296 [2024-12-06 04:12:08.671003] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.674  [2024-12-06T04:12:11.176Z] Copying: 134/512 [MB] (134 MBps) [2024-12-06T04:12:12.113Z] Copying: 259/512 [MB] (125 MBps) [2024-12-06T04:12:13.063Z] Copying: 390/512 [MB] (130 MBps) [2024-12-06T04:12:13.386Z] Copying: 512/512 [MB] (average 129 MBps) 00:09:00.821 00:09:00.821 04:12:13 -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:09:00.821 04:12:13 -- dd/uring.sh@66 -- # [[ z06vucg55g6pm4mww8vgok6ywj46jxwvqofpubjnvsc6mbxkxi0tlnuwrpl2w4pha96gy1qql4zdwbbkhh5rfp7mzrdvnalb0o6plx3qd4wiql3yec629rnpthofu2b0sozx680u5oyvstka1q1l9zxem3wo3ullrjoe4isx6vxpfkiegd8oqb3m1m6fg5dsx8t85lhjrp1oaxw6nctipbfb77a2fn4oxw7fgwygeus4zylbg093halrbkejqcatezrepbiwqrox7oahcuwhpw68o0l2a8pfr6z172ez8rgak0n64yclfsa2he9zrmizs7jlcplczdgjq4km7cb1pe0p60amp1trm8cbff04me5z9qmo9vb0hq9odh1o1zm0ljupsa7xnay7anflwpuned0czvcc89nbmmiozmipjxvldqjcjcuu45ya6l4x0yht2rjopckchercyvdvzx1vuawkkovnyb1k0x6gb164b33famt2eh1u9e7rppwuy2z382b2fxoepqzzwempsozffuybu4s7sdf19db9b96ecq3ljoht09au1t3gdp74lg6fsrucxwlwmrkr5ce6vrnoe5z6y2mm9we3pbwi9ylkzpd5gg6b39n64ed69k8hv84qz4ggfglj5gcq3kbcwevuomqt2cwf0af2k4pmqewhccxxp05f3cig7hvixjqyd26mhwsht3liwh5ejm90780yaye2jf5x346ukglq6eco5f3kyujg8b56u35zg42am0jz59beis4z6iis9t5ej7o2s6042t7ug432olg9ymatu8wvg4cee1vhnqp575kij0prj1zantsjbpc3ldp4savpzkob1dnno96kymk4nhiczrvddspygrip5m4i3oh96tt9uffh2vmj9t656nx52wuwkmyx14jiplpp51dcaoumwy4wki8krlse6dgtihfyte6zyhblh7c8l0lamnhxphb1riuf2c42kooxv21x9o7pxjvy25sw76e3hk6s1pjnxbv0 == 
\z\0\6\v\u\c\g\5\5\g\6\p\m\4\m\w\w\8\v\g\o\k\6\y\w\j\4\6\j\x\w\v\q\o\f\p\u\b\j\n\v\s\c\6\m\b\x\k\x\i\0\t\l\n\u\w\r\p\l\2\w\4\p\h\a\9\6\g\y\1\q\q\l\4\z\d\w\b\b\k\h\h\5\r\f\p\7\m\z\r\d\v\n\a\l\b\0\o\6\p\l\x\3\q\d\4\w\i\q\l\3\y\e\c\6\2\9\r\n\p\t\h\o\f\u\2\b\0\s\o\z\x\6\8\0\u\5\o\y\v\s\t\k\a\1\q\1\l\9\z\x\e\m\3\w\o\3\u\l\l\r\j\o\e\4\i\s\x\6\v\x\p\f\k\i\e\g\d\8\o\q\b\3\m\1\m\6\f\g\5\d\s\x\8\t\8\5\l\h\j\r\p\1\o\a\x\w\6\n\c\t\i\p\b\f\b\7\7\a\2\f\n\4\o\x\w\7\f\g\w\y\g\e\u\s\4\z\y\l\b\g\0\9\3\h\a\l\r\b\k\e\j\q\c\a\t\e\z\r\e\p\b\i\w\q\r\o\x\7\o\a\h\c\u\w\h\p\w\6\8\o\0\l\2\a\8\p\f\r\6\z\1\7\2\e\z\8\r\g\a\k\0\n\6\4\y\c\l\f\s\a\2\h\e\9\z\r\m\i\z\s\7\j\l\c\p\l\c\z\d\g\j\q\4\k\m\7\c\b\1\p\e\0\p\6\0\a\m\p\1\t\r\m\8\c\b\f\f\0\4\m\e\5\z\9\q\m\o\9\v\b\0\h\q\9\o\d\h\1\o\1\z\m\0\l\j\u\p\s\a\7\x\n\a\y\7\a\n\f\l\w\p\u\n\e\d\0\c\z\v\c\c\8\9\n\b\m\m\i\o\z\m\i\p\j\x\v\l\d\q\j\c\j\c\u\u\4\5\y\a\6\l\4\x\0\y\h\t\2\r\j\o\p\c\k\c\h\e\r\c\y\v\d\v\z\x\1\v\u\a\w\k\k\o\v\n\y\b\1\k\0\x\6\g\b\1\6\4\b\3\3\f\a\m\t\2\e\h\1\u\9\e\7\r\p\p\w\u\y\2\z\3\8\2\b\2\f\x\o\e\p\q\z\z\w\e\m\p\s\o\z\f\f\u\y\b\u\4\s\7\s\d\f\1\9\d\b\9\b\9\6\e\c\q\3\l\j\o\h\t\0\9\a\u\1\t\3\g\d\p\7\4\l\g\6\f\s\r\u\c\x\w\l\w\m\r\k\r\5\c\e\6\v\r\n\o\e\5\z\6\y\2\m\m\9\w\e\3\p\b\w\i\9\y\l\k\z\p\d\5\g\g\6\b\3\9\n\6\4\e\d\6\9\k\8\h\v\8\4\q\z\4\g\g\f\g\l\j\5\g\c\q\3\k\b\c\w\e\v\u\o\m\q\t\2\c\w\f\0\a\f\2\k\4\p\m\q\e\w\h\c\c\x\x\p\0\5\f\3\c\i\g\7\h\v\i\x\j\q\y\d\2\6\m\h\w\s\h\t\3\l\i\w\h\5\e\j\m\9\0\7\8\0\y\a\y\e\2\j\f\5\x\3\4\6\u\k\g\l\q\6\e\c\o\5\f\3\k\y\u\j\g\8\b\5\6\u\3\5\z\g\4\2\a\m\0\j\z\5\9\b\e\i\s\4\z\6\i\i\s\9\t\5\e\j\7\o\2\s\6\0\4\2\t\7\u\g\4\3\2\o\l\g\9\y\m\a\t\u\8\w\v\g\4\c\e\e\1\v\h\n\q\p\5\7\5\k\i\j\0\p\r\j\1\z\a\n\t\s\j\b\p\c\3\l\d\p\4\s\a\v\p\z\k\o\b\1\d\n\n\o\9\6\k\y\m\k\4\n\h\i\c\z\r\v\d\d\s\p\y\g\r\i\p\5\m\4\i\3\o\h\9\6\t\t\9\u\f\f\h\2\v\m\j\9\t\6\5\6\n\x\5\2\w\u\w\k\m\y\x\1\4\j\i\p\l\p\p\5\1\d\c\a\o\u\m\w\y\4\w\k\i\8\k\r\l\s\e\6\d\g\t\i\h\f\y\t\e\6\z\y\h\b\l\h\7\c\8\l\0\l\a\m\n\h\x\p\h\b\1\r\i\u\f\2\c\4\2\k\o\o\x\v\2\1\x\9\o\7\p\x\j\v\y\2\5\s\w\7\6\e\3\h\k\6\s\1\p\j\n\x\b\v\0 ]] 00:09:00.821 04:12:13 -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:09:00.821 04:12:13 -- dd/uring.sh@69 -- # [[ z06vucg55g6pm4mww8vgok6ywj46jxwvqofpubjnvsc6mbxkxi0tlnuwrpl2w4pha96gy1qql4zdwbbkhh5rfp7mzrdvnalb0o6plx3qd4wiql3yec629rnpthofu2b0sozx680u5oyvstka1q1l9zxem3wo3ullrjoe4isx6vxpfkiegd8oqb3m1m6fg5dsx8t85lhjrp1oaxw6nctipbfb77a2fn4oxw7fgwygeus4zylbg093halrbkejqcatezrepbiwqrox7oahcuwhpw68o0l2a8pfr6z172ez8rgak0n64yclfsa2he9zrmizs7jlcplczdgjq4km7cb1pe0p60amp1trm8cbff04me5z9qmo9vb0hq9odh1o1zm0ljupsa7xnay7anflwpuned0czvcc89nbmmiozmipjxvldqjcjcuu45ya6l4x0yht2rjopckchercyvdvzx1vuawkkovnyb1k0x6gb164b33famt2eh1u9e7rppwuy2z382b2fxoepqzzwempsozffuybu4s7sdf19db9b96ecq3ljoht09au1t3gdp74lg6fsrucxwlwmrkr5ce6vrnoe5z6y2mm9we3pbwi9ylkzpd5gg6b39n64ed69k8hv84qz4ggfglj5gcq3kbcwevuomqt2cwf0af2k4pmqewhccxxp05f3cig7hvixjqyd26mhwsht3liwh5ejm90780yaye2jf5x346ukglq6eco5f3kyujg8b56u35zg42am0jz59beis4z6iis9t5ej7o2s6042t7ug432olg9ymatu8wvg4cee1vhnqp575kij0prj1zantsjbpc3ldp4savpzkob1dnno96kymk4nhiczrvddspygrip5m4i3oh96tt9uffh2vmj9t656nx52wuwkmyx14jiplpp51dcaoumwy4wki8krlse6dgtihfyte6zyhblh7c8l0lamnhxphb1riuf2c42kooxv21x9o7pxjvy25sw76e3hk6s1pjnxbv0 == 
\z\0\6\v\u\c\g\5\5\g\6\p\m\4\m\w\w\8\v\g\o\k\6\y\w\j\4\6\j\x\w\v\q\o\f\p\u\b\j\n\v\s\c\6\m\b\x\k\x\i\0\t\l\n\u\w\r\p\l\2\w\4\p\h\a\9\6\g\y\1\q\q\l\4\z\d\w\b\b\k\h\h\5\r\f\p\7\m\z\r\d\v\n\a\l\b\0\o\6\p\l\x\3\q\d\4\w\i\q\l\3\y\e\c\6\2\9\r\n\p\t\h\o\f\u\2\b\0\s\o\z\x\6\8\0\u\5\o\y\v\s\t\k\a\1\q\1\l\9\z\x\e\m\3\w\o\3\u\l\l\r\j\o\e\4\i\s\x\6\v\x\p\f\k\i\e\g\d\8\o\q\b\3\m\1\m\6\f\g\5\d\s\x\8\t\8\5\l\h\j\r\p\1\o\a\x\w\6\n\c\t\i\p\b\f\b\7\7\a\2\f\n\4\o\x\w\7\f\g\w\y\g\e\u\s\4\z\y\l\b\g\0\9\3\h\a\l\r\b\k\e\j\q\c\a\t\e\z\r\e\p\b\i\w\q\r\o\x\7\o\a\h\c\u\w\h\p\w\6\8\o\0\l\2\a\8\p\f\r\6\z\1\7\2\e\z\8\r\g\a\k\0\n\6\4\y\c\l\f\s\a\2\h\e\9\z\r\m\i\z\s\7\j\l\c\p\l\c\z\d\g\j\q\4\k\m\7\c\b\1\p\e\0\p\6\0\a\m\p\1\t\r\m\8\c\b\f\f\0\4\m\e\5\z\9\q\m\o\9\v\b\0\h\q\9\o\d\h\1\o\1\z\m\0\l\j\u\p\s\a\7\x\n\a\y\7\a\n\f\l\w\p\u\n\e\d\0\c\z\v\c\c\8\9\n\b\m\m\i\o\z\m\i\p\j\x\v\l\d\q\j\c\j\c\u\u\4\5\y\a\6\l\4\x\0\y\h\t\2\r\j\o\p\c\k\c\h\e\r\c\y\v\d\v\z\x\1\v\u\a\w\k\k\o\v\n\y\b\1\k\0\x\6\g\b\1\6\4\b\3\3\f\a\m\t\2\e\h\1\u\9\e\7\r\p\p\w\u\y\2\z\3\8\2\b\2\f\x\o\e\p\q\z\z\w\e\m\p\s\o\z\f\f\u\y\b\u\4\s\7\s\d\f\1\9\d\b\9\b\9\6\e\c\q\3\l\j\o\h\t\0\9\a\u\1\t\3\g\d\p\7\4\l\g\6\f\s\r\u\c\x\w\l\w\m\r\k\r\5\c\e\6\v\r\n\o\e\5\z\6\y\2\m\m\9\w\e\3\p\b\w\i\9\y\l\k\z\p\d\5\g\g\6\b\3\9\n\6\4\e\d\6\9\k\8\h\v\8\4\q\z\4\g\g\f\g\l\j\5\g\c\q\3\k\b\c\w\e\v\u\o\m\q\t\2\c\w\f\0\a\f\2\k\4\p\m\q\e\w\h\c\c\x\x\p\0\5\f\3\c\i\g\7\h\v\i\x\j\q\y\d\2\6\m\h\w\s\h\t\3\l\i\w\h\5\e\j\m\9\0\7\8\0\y\a\y\e\2\j\f\5\x\3\4\6\u\k\g\l\q\6\e\c\o\5\f\3\k\y\u\j\g\8\b\5\6\u\3\5\z\g\4\2\a\m\0\j\z\5\9\b\e\i\s\4\z\6\i\i\s\9\t\5\e\j\7\o\2\s\6\0\4\2\t\7\u\g\4\3\2\o\l\g\9\y\m\a\t\u\8\w\v\g\4\c\e\e\1\v\h\n\q\p\5\7\5\k\i\j\0\p\r\j\1\z\a\n\t\s\j\b\p\c\3\l\d\p\4\s\a\v\p\z\k\o\b\1\d\n\n\o\9\6\k\y\m\k\4\n\h\i\c\z\r\v\d\d\s\p\y\g\r\i\p\5\m\4\i\3\o\h\9\6\t\t\9\u\f\f\h\2\v\m\j\9\t\6\5\6\n\x\5\2\w\u\w\k\m\y\x\1\4\j\i\p\l\p\p\5\1\d\c\a\o\u\m\w\y\4\w\k\i\8\k\r\l\s\e\6\d\g\t\i\h\f\y\t\e\6\z\y\h\b\l\h\7\c\8\l\0\l\a\m\n\h\x\p\h\b\1\r\i\u\f\2\c\4\2\k\o\o\x\v\2\1\x\9\o\7\p\x\j\v\y\2\5\s\w\7\6\e\3\h\k\6\s\1\p\j\n\x\b\v\0 ]] 00:09:00.821 04:12:13 -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:09:01.387 04:12:13 -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:09:01.387 04:12:13 -- dd/uring.sh@75 -- # gen_conf 00:09:01.387 04:12:13 -- dd/common.sh@31 -- # xtrace_disable 00:09:01.387 04:12:13 -- common/autotest_common.sh@10 -- # set +x 00:09:01.387 [2024-12-06 04:12:13.696584] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
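The checks above verify the round trip through uring0 two ways: the leading magic read back from the dump file must equal the original string, and diff -q must find magic.dump0 and magic.dump1 byte-identical. A condensed sketch of that verification, using the file names from this run:

  read -rn1024 verify_magic < /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1
  [[ $verify_magic == "$magic" ]]   # $magic holds the string produced by gen_bytes earlier in the test
  diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 \
          /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1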
00:09:01.387 [2024-12-06 04:12:13.696670] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71416 ] 00:09:01.387 { 00:09:01.387 "subsystems": [ 00:09:01.387 { 00:09:01.387 "subsystem": "bdev", 00:09:01.387 "config": [ 00:09:01.387 { 00:09:01.387 "params": { 00:09:01.387 "block_size": 512, 00:09:01.387 "num_blocks": 1048576, 00:09:01.387 "name": "malloc0" 00:09:01.387 }, 00:09:01.387 "method": "bdev_malloc_create" 00:09:01.387 }, 00:09:01.387 { 00:09:01.387 "params": { 00:09:01.387 "filename": "/dev/zram1", 00:09:01.387 "name": "uring0" 00:09:01.387 }, 00:09:01.387 "method": "bdev_uring_create" 00:09:01.387 }, 00:09:01.387 { 00:09:01.387 "method": "bdev_wait_for_examine" 00:09:01.387 } 00:09:01.387 ] 00:09:01.387 } 00:09:01.387 ] 00:09:01.387 } 00:09:01.387 [2024-12-06 04:12:13.831741] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.387 [2024-12-06 04:12:13.903752] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.763  [2024-12-06T04:12:16.270Z] Copying: 153/512 [MB] (153 MBps) [2024-12-06T04:12:17.206Z] Copying: 315/512 [MB] (162 MBps) [2024-12-06T04:12:17.465Z] Copying: 471/512 [MB] (156 MBps) [2024-12-06T04:12:18.033Z] Copying: 512/512 [MB] (average 156 MBps) 00:09:05.468 00:09:05.468 04:12:17 -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:09:05.468 04:12:17 -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:09:05.468 04:12:17 -- dd/uring.sh@87 -- # : 00:09:05.468 04:12:17 -- dd/uring.sh@87 -- # : 00:09:05.468 04:12:17 -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:09:05.468 04:12:17 -- dd/uring.sh@87 -- # gen_conf 00:09:05.468 04:12:17 -- dd/common.sh@31 -- # xtrace_disable 00:09:05.468 04:12:17 -- common/autotest_common.sh@10 -- # set +x 00:09:05.468 [2024-12-06 04:12:17.877834] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:09:05.468 [2024-12-06 04:12:17.877956] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71472 ] 00:09:05.468 { 00:09:05.468 "subsystems": [ 00:09:05.468 { 00:09:05.468 "subsystem": "bdev", 00:09:05.468 "config": [ 00:09:05.468 { 00:09:05.468 "params": { 00:09:05.468 "block_size": 512, 00:09:05.468 "num_blocks": 1048576, 00:09:05.468 "name": "malloc0" 00:09:05.468 }, 00:09:05.468 "method": "bdev_malloc_create" 00:09:05.468 }, 00:09:05.468 { 00:09:05.468 "params": { 00:09:05.468 "filename": "/dev/zram1", 00:09:05.468 "name": "uring0" 00:09:05.468 }, 00:09:05.468 "method": "bdev_uring_create" 00:09:05.468 }, 00:09:05.468 { 00:09:05.468 "params": { 00:09:05.468 "name": "uring0" 00:09:05.468 }, 00:09:05.468 "method": "bdev_uring_delete" 00:09:05.468 }, 00:09:05.468 { 00:09:05.468 "method": "bdev_wait_for_examine" 00:09:05.468 } 00:09:05.468 ] 00:09:05.468 } 00:09:05.468 ] 00:09:05.468 } 00:09:05.468 [2024-12-06 04:12:18.019509] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:05.727 [2024-12-06 04:12:18.105820] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.987  [2024-12-06T04:12:18.811Z] Copying: 0/0 [B] (average 0 Bps) 00:09:06.246 00:09:06.247 04:12:18 -- dd/uring.sh@94 -- # : 00:09:06.247 04:12:18 -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:09:06.247 04:12:18 -- dd/uring.sh@94 -- # gen_conf 00:09:06.247 04:12:18 -- common/autotest_common.sh@650 -- # local es=0 00:09:06.247 04:12:18 -- dd/common.sh@31 -- # xtrace_disable 00:09:06.247 04:12:18 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:09:06.247 04:12:18 -- common/autotest_common.sh@10 -- # set +x 00:09:06.247 04:12:18 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:06.247 04:12:18 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:06.247 04:12:18 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:06.247 04:12:18 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:06.247 04:12:18 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:06.247 04:12:18 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:06.247 04:12:18 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:06.247 04:12:18 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:06.247 04:12:18 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:09:06.506 [2024-12-06 04:12:18.855926] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:09:06.506 [2024-12-06 04:12:18.856031] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71507 ] 00:09:06.506 { 00:09:06.506 "subsystems": [ 00:09:06.506 { 00:09:06.506 "subsystem": "bdev", 00:09:06.506 "config": [ 00:09:06.506 { 00:09:06.506 "params": { 00:09:06.506 "block_size": 512, 00:09:06.506 "num_blocks": 1048576, 00:09:06.506 "name": "malloc0" 00:09:06.506 }, 00:09:06.506 "method": "bdev_malloc_create" 00:09:06.506 }, 00:09:06.506 { 00:09:06.506 "params": { 00:09:06.506 "filename": "/dev/zram1", 00:09:06.506 "name": "uring0" 00:09:06.506 }, 00:09:06.506 "method": "bdev_uring_create" 00:09:06.506 }, 00:09:06.506 { 00:09:06.506 "params": { 00:09:06.506 "name": "uring0" 00:09:06.506 }, 00:09:06.506 "method": "bdev_uring_delete" 00:09:06.506 }, 00:09:06.506 { 00:09:06.506 "method": "bdev_wait_for_examine" 00:09:06.506 } 00:09:06.506 ] 00:09:06.506 } 00:09:06.506 ] 00:09:06.506 } 00:09:06.506 [2024-12-06 04:12:18.994631] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:06.506 [2024-12-06 04:12:19.057198] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.763 [2024-12-06 04:12:19.307607] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:09:06.763 [2024-12-06 04:12:19.307653] spdk_dd.c: 932:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:09:06.764 [2024-12-06 04:12:19.307664] spdk_dd.c:1074:dd_run: *ERROR*: uring0: No such device 00:09:06.764 [2024-12-06 04:12:19.307685] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:07.329 [2024-12-06 04:12:19.648140] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:09:07.329 04:12:19 -- common/autotest_common.sh@653 -- # es=237 00:09:07.329 04:12:19 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:07.329 04:12:19 -- common/autotest_common.sh@662 -- # es=109 00:09:07.329 04:12:19 -- common/autotest_common.sh@663 -- # case "$es" in 00:09:07.329 04:12:19 -- common/autotest_common.sh@670 -- # es=1 00:09:07.329 04:12:19 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:07.329 04:12:19 -- dd/uring.sh@99 -- # remove_zram_dev 1 00:09:07.329 04:12:19 -- dd/common.sh@172 -- # local id=1 00:09:07.329 04:12:19 -- dd/common.sh@174 -- # [[ -e /sys/block/zram1 ]] 00:09:07.329 04:12:19 -- dd/common.sh@176 -- # echo 1 00:09:07.329 04:12:19 -- dd/common.sh@177 -- # echo 1 00:09:07.329 04:12:19 -- dd/uring.sh@100 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:09:07.587 00:09:07.587 real 0m16.281s 00:09:07.587 user 0m9.307s 00:09:07.587 sys 0m6.332s 00:09:07.587 04:12:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:07.587 ************************************ 00:09:07.587 END TEST dd_uring_copy 00:09:07.587 04:12:19 -- common/autotest_common.sh@10 -- # set +x 00:09:07.587 ************************************ 00:09:07.587 00:09:07.587 real 0m16.534s 00:09:07.587 user 0m9.448s 00:09:07.587 sys 0m6.445s 00:09:07.587 04:12:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:07.587 ************************************ 00:09:07.587 END TEST spdk_dd_uring 00:09:07.587 ************************************ 00:09:07.587 04:12:20 -- common/autotest_common.sh@10 -- # set +x 00:09:07.587 04:12:20 -- dd/dd.sh@27 -- # run_test spdk_dd_sparse 
/home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:09:07.587 04:12:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:07.587 04:12:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:07.587 04:12:20 -- common/autotest_common.sh@10 -- # set +x 00:09:07.587 ************************************ 00:09:07.587 START TEST spdk_dd_sparse 00:09:07.587 ************************************ 00:09:07.587 04:12:20 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:09:07.846 * Looking for test storage... 00:09:07.846 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:09:07.847 04:12:20 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:07.847 04:12:20 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:07.847 04:12:20 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:07.847 04:12:20 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:07.847 04:12:20 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:07.847 04:12:20 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:07.847 04:12:20 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:07.847 04:12:20 -- scripts/common.sh@335 -- # IFS=.-: 00:09:07.847 04:12:20 -- scripts/common.sh@335 -- # read -ra ver1 00:09:07.847 04:12:20 -- scripts/common.sh@336 -- # IFS=.-: 00:09:07.847 04:12:20 -- scripts/common.sh@336 -- # read -ra ver2 00:09:07.847 04:12:20 -- scripts/common.sh@337 -- # local 'op=<' 00:09:07.847 04:12:20 -- scripts/common.sh@339 -- # ver1_l=2 00:09:07.847 04:12:20 -- scripts/common.sh@340 -- # ver2_l=1 00:09:07.847 04:12:20 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:07.847 04:12:20 -- scripts/common.sh@343 -- # case "$op" in 00:09:07.847 04:12:20 -- scripts/common.sh@344 -- # : 1 00:09:07.847 04:12:20 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:07.847 04:12:20 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:07.847 04:12:20 -- scripts/common.sh@364 -- # decimal 1 00:09:07.847 04:12:20 -- scripts/common.sh@352 -- # local d=1 00:09:07.847 04:12:20 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:07.847 04:12:20 -- scripts/common.sh@354 -- # echo 1 00:09:07.847 04:12:20 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:07.847 04:12:20 -- scripts/common.sh@365 -- # decimal 2 00:09:07.847 04:12:20 -- scripts/common.sh@352 -- # local d=2 00:09:07.847 04:12:20 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:07.847 04:12:20 -- scripts/common.sh@354 -- # echo 2 00:09:07.847 04:12:20 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:07.847 04:12:20 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:07.847 04:12:20 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:07.847 04:12:20 -- scripts/common.sh@367 -- # return 0 00:09:07.847 04:12:20 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:07.847 04:12:20 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:07.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.847 --rc genhtml_branch_coverage=1 00:09:07.847 --rc genhtml_function_coverage=1 00:09:07.847 --rc genhtml_legend=1 00:09:07.847 --rc geninfo_all_blocks=1 00:09:07.847 --rc geninfo_unexecuted_blocks=1 00:09:07.847 00:09:07.847 ' 00:09:07.847 04:12:20 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:07.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.847 --rc genhtml_branch_coverage=1 00:09:07.847 --rc genhtml_function_coverage=1 00:09:07.847 --rc genhtml_legend=1 00:09:07.847 --rc geninfo_all_blocks=1 00:09:07.847 --rc geninfo_unexecuted_blocks=1 00:09:07.847 00:09:07.847 ' 00:09:07.847 04:12:20 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:07.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.847 --rc genhtml_branch_coverage=1 00:09:07.847 --rc genhtml_function_coverage=1 00:09:07.847 --rc genhtml_legend=1 00:09:07.847 --rc geninfo_all_blocks=1 00:09:07.847 --rc geninfo_unexecuted_blocks=1 00:09:07.847 00:09:07.847 ' 00:09:07.847 04:12:20 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:07.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.847 --rc genhtml_branch_coverage=1 00:09:07.847 --rc genhtml_function_coverage=1 00:09:07.847 --rc genhtml_legend=1 00:09:07.847 --rc geninfo_all_blocks=1 00:09:07.847 --rc geninfo_unexecuted_blocks=1 00:09:07.847 00:09:07.847 ' 00:09:07.847 04:12:20 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:07.847 04:12:20 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:07.847 04:12:20 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:07.847 04:12:20 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:07.847 04:12:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.847 04:12:20 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.847 04:12:20 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.847 04:12:20 -- paths/export.sh@5 -- # export PATH 00:09:07.847 04:12:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.847 04:12:20 -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:09:07.847 04:12:20 -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:09:07.847 04:12:20 -- dd/sparse.sh@110 -- # file1=file_zero1 00:09:07.847 04:12:20 -- dd/sparse.sh@111 -- # file2=file_zero2 00:09:07.847 04:12:20 -- dd/sparse.sh@112 -- # file3=file_zero3 00:09:07.847 04:12:20 -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:09:07.847 04:12:20 -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:09:07.847 04:12:20 -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:09:07.847 04:12:20 -- dd/sparse.sh@118 -- # prepare 00:09:07.847 04:12:20 -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:09:07.847 04:12:20 -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:09:07.847 1+0 records in 00:09:07.847 1+0 records out 00:09:07.847 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00785523 s, 534 MB/s 00:09:07.847 04:12:20 -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:09:07.847 1+0 records in 00:09:07.847 1+0 records out 00:09:07.847 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00779328 s, 538 MB/s 00:09:07.847 04:12:20 -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:09:07.847 1+0 records in 00:09:07.847 1+0 records out 00:09:07.847 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.0052478 s, 799 MB/s 00:09:07.847 04:12:20 -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:09:07.847 04:12:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:07.847 04:12:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:07.847 04:12:20 -- common/autotest_common.sh@10 -- # set +x 00:09:07.847 ************************************ 00:09:07.847 START TEST dd_sparse_file_to_file 00:09:07.847 
************************************ 00:09:07.847 04:12:20 -- common/autotest_common.sh@1114 -- # file_to_file 00:09:07.847 04:12:20 -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:09:07.847 04:12:20 -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:09:07.847 04:12:20 -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:09:07.847 04:12:20 -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:09:07.847 04:12:20 -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:09:07.847 04:12:20 -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:09:07.847 04:12:20 -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:09:07.847 04:12:20 -- dd/sparse.sh@41 -- # gen_conf 00:09:07.847 04:12:20 -- dd/common.sh@31 -- # xtrace_disable 00:09:07.847 04:12:20 -- common/autotest_common.sh@10 -- # set +x 00:09:07.847 [2024-12-06 04:12:20.366569] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:07.847 [2024-12-06 04:12:20.366660] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71610 ] 00:09:07.847 { 00:09:07.847 "subsystems": [ 00:09:07.847 { 00:09:07.847 "subsystem": "bdev", 00:09:07.847 "config": [ 00:09:07.847 { 00:09:07.847 "params": { 00:09:07.847 "block_size": 4096, 00:09:07.847 "filename": "dd_sparse_aio_disk", 00:09:07.847 "name": "dd_aio" 00:09:07.847 }, 00:09:07.847 "method": "bdev_aio_create" 00:09:07.847 }, 00:09:07.847 { 00:09:07.847 "params": { 00:09:07.847 "lvs_name": "dd_lvstore", 00:09:07.847 "bdev_name": "dd_aio" 00:09:07.847 }, 00:09:07.847 "method": "bdev_lvol_create_lvstore" 00:09:07.847 }, 00:09:07.847 { 00:09:07.847 "method": "bdev_wait_for_examine" 00:09:07.847 } 00:09:07.847 ] 00:09:07.847 } 00:09:07.847 ] 00:09:07.847 } 00:09:08.106 [2024-12-06 04:12:20.505718] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:08.106 [2024-12-06 04:12:20.582686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.365  [2024-12-06T04:12:21.189Z] Copying: 12/36 [MB] (average 1200 MBps) 00:09:08.624 00:09:08.624 04:12:20 -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:09:08.624 04:12:20 -- dd/sparse.sh@47 -- # stat1_s=37748736 00:09:08.624 04:12:21 -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:09:08.624 04:12:21 -- dd/sparse.sh@48 -- # stat2_s=37748736 00:09:08.624 04:12:21 -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:09:08.624 04:12:21 -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:09:08.624 04:12:21 -- dd/sparse.sh@52 -- # stat1_b=24576 00:09:08.624 04:12:21 -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:09:08.624 04:12:21 -- dd/sparse.sh@53 -- # stat2_b=24576 00:09:08.624 04:12:21 -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:09:08.624 00:09:08.624 real 0m0.700s 00:09:08.624 user 0m0.385s 00:09:08.624 sys 0m0.215s 00:09:08.624 04:12:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:08.624 ************************************ 00:09:08.624 END TEST dd_sparse_file_to_file 00:09:08.624 ************************************ 00:09:08.624 04:12:21 -- common/autotest_common.sh@10 -- # set +x 00:09:08.624 04:12:21 -- dd/sparse.sh@121 -- # 
run_test dd_sparse_file_to_bdev file_to_bdev 00:09:08.624 04:12:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:08.624 04:12:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:08.624 04:12:21 -- common/autotest_common.sh@10 -- # set +x 00:09:08.624 ************************************ 00:09:08.624 START TEST dd_sparse_file_to_bdev 00:09:08.624 ************************************ 00:09:08.624 04:12:21 -- common/autotest_common.sh@1114 -- # file_to_bdev 00:09:08.624 04:12:21 -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:09:08.624 04:12:21 -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:09:08.624 04:12:21 -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size']='37748736' ['thin_provision']='true') 00:09:08.624 04:12:21 -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:09:08.624 04:12:21 -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:09:08.624 04:12:21 -- dd/sparse.sh@73 -- # gen_conf 00:09:08.624 04:12:21 -- dd/common.sh@31 -- # xtrace_disable 00:09:08.624 04:12:21 -- common/autotest_common.sh@10 -- # set +x 00:09:08.624 [2024-12-06 04:12:21.116885] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:08.624 [2024-12-06 04:12:21.116979] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71646 ] 00:09:08.624 { 00:09:08.624 "subsystems": [ 00:09:08.625 { 00:09:08.625 "subsystem": "bdev", 00:09:08.625 "config": [ 00:09:08.625 { 00:09:08.625 "params": { 00:09:08.625 "block_size": 4096, 00:09:08.625 "filename": "dd_sparse_aio_disk", 00:09:08.625 "name": "dd_aio" 00:09:08.625 }, 00:09:08.625 "method": "bdev_aio_create" 00:09:08.625 }, 00:09:08.625 { 00:09:08.625 "params": { 00:09:08.625 "lvs_name": "dd_lvstore", 00:09:08.625 "lvol_name": "dd_lvol", 00:09:08.625 "size": 37748736, 00:09:08.625 "thin_provision": true 00:09:08.625 }, 00:09:08.625 "method": "bdev_lvol_create" 00:09:08.625 }, 00:09:08.625 { 00:09:08.625 "method": "bdev_wait_for_examine" 00:09:08.625 } 00:09:08.625 ] 00:09:08.625 } 00:09:08.625 ] 00:09:08.625 } 00:09:08.883 [2024-12-06 04:12:21.257841] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:08.883 [2024-12-06 04:12:21.338479] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.883 [2024-12-06 04:12:21.443049] vbdev_lvol_rpc.c: 347:rpc_bdev_lvol_create: *WARNING*: vbdev_lvol_rpc_req_size: deprecated feature rpc_bdev_lvol_create/resize req.size to be removed in v23.09 00:09:09.142  [2024-12-06T04:12:21.707Z] Copying: 12/36 [MB] (average 480 MBps)[2024-12-06 04:12:21.488184] app.c: 883:log_deprecation_hits: *WARNING*: vbdev_lvol_rpc_req_size: deprecation 'rpc_bdev_lvol_create/resize req.size' scheduled for removal in v23.09 hit 1 times 00:09:09.400 00:09:09.400 00:09:09.400 00:09:09.400 real 0m0.692s 00:09:09.400 user 0m0.436s 00:09:09.400 sys 0m0.185s 00:09:09.400 04:12:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:09.400 ************************************ 00:09:09.400 END TEST dd_sparse_file_to_bdev 00:09:09.400 ************************************ 00:09:09.400 04:12:21 -- common/autotest_common.sh@10 -- # set +x 
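The two sparse tests above exercise spdk_dd's hole-skipping path: a 100 MiB file (dd_sparse_aio_disk) backs an AIO bdev that hosts the dd_lvstore lvstore, file_zero1 gets three 4 MiB zero chunks at offsets 0, 16 MiB and 32 MiB (dd seek=0/4/8 with bs=4M), and each copy is verified by checking that source and destination agree on both apparent size (stat %s, 37748736) and allocated blocks (stat %b, 24576), i.e. the holes survive the copy. Below is a minimal stand-alone sketch of that flow, assuming a built spdk_dd at a placeholder path; it reuses only the commands, flags (--if/--of, --bs, --sparse, --json) and the JSON config already echoed in the log, so treat it as an illustration rather than the harness itself.

    SPDK_DD=/path/to/spdk/build/bin/spdk_dd    # placeholder; the log runs the copy under spdk_repo/spdk/build/bin

    # Back the AIO bdev with a 100 MiB file and punch three 4 MiB data regions into a sparse source file.
    truncate dd_sparse_aio_disk --size 104857600
    dd if=/dev/zero of=file_zero1 bs=4M count=1           # data at offset 0
    dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4    # data at 16 MiB
    dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8    # data at 32 MiB

    # JSON config matching the one printed in the log: AIO bdev plus an lvstore on top of it.
    cat > dd_sparse.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            { "params": { "block_size": 4096, "filename": "dd_sparse_aio_disk", "name": "dd_aio" },
              "method": "bdev_aio_create" },
            { "params": { "lvs_name": "dd_lvstore", "bdev_name": "dd_aio" },
              "method": "bdev_lvol_create_lvstore" },
            { "method": "bdev_wait_for_examine" }
          ]
        }
      ]
    }
    EOF

    # Sparse file-to-file copy, then confirm apparent size and allocated blocks match on both sides.
    "$SPDK_DD" --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json dd_sparse.json
    stat --printf='%s\n' file_zero1 file_zero2    # both should report 37748736
    stat --printf='%b\n' file_zero1 file_zero2    # both should report 24576

The file-to-bdev and bdev-to-file runs in the log are the same invocation with --of swapped for --ob=dd_lvstore/dd_lvol (after adding a thin-provisioned bdev_lvol_create entry to the config) and --if swapped for --ib, respectively.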
00:09:09.400 04:12:21 -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:09:09.400 04:12:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:09.400 04:12:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:09.400 04:12:21 -- common/autotest_common.sh@10 -- # set +x 00:09:09.400 ************************************ 00:09:09.400 START TEST dd_sparse_bdev_to_file 00:09:09.400 ************************************ 00:09:09.400 04:12:21 -- common/autotest_common.sh@1114 -- # bdev_to_file 00:09:09.400 04:12:21 -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:09:09.400 04:12:21 -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:09:09.400 04:12:21 -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:09:09.400 04:12:21 -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:09:09.400 04:12:21 -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:09:09.400 04:12:21 -- dd/sparse.sh@91 -- # gen_conf 00:09:09.400 04:12:21 -- dd/common.sh@31 -- # xtrace_disable 00:09:09.400 04:12:21 -- common/autotest_common.sh@10 -- # set +x 00:09:09.400 [2024-12-06 04:12:21.861628] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:09.400 [2024-12-06 04:12:21.861726] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71683 ] 00:09:09.400 { 00:09:09.400 "subsystems": [ 00:09:09.400 { 00:09:09.400 "subsystem": "bdev", 00:09:09.400 "config": [ 00:09:09.400 { 00:09:09.400 "params": { 00:09:09.400 "block_size": 4096, 00:09:09.400 "filename": "dd_sparse_aio_disk", 00:09:09.400 "name": "dd_aio" 00:09:09.400 }, 00:09:09.400 "method": "bdev_aio_create" 00:09:09.400 }, 00:09:09.400 { 00:09:09.400 "method": "bdev_wait_for_examine" 00:09:09.400 } 00:09:09.400 ] 00:09:09.400 } 00:09:09.400 ] 00:09:09.400 } 00:09:09.658 [2024-12-06 04:12:22.004373] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.658 [2024-12-06 04:12:22.079882] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.658  [2024-12-06T04:12:22.516Z] Copying: 12/36 [MB] (average 857 MBps) 00:09:09.951 00:09:09.951 04:12:22 -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:09:09.951 04:12:22 -- dd/sparse.sh@97 -- # stat2_s=37748736 00:09:09.951 04:12:22 -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:09:09.951 04:12:22 -- dd/sparse.sh@98 -- # stat3_s=37748736 00:09:09.951 04:12:22 -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:09:09.951 04:12:22 -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:09:09.951 04:12:22 -- dd/sparse.sh@102 -- # stat2_b=24576 00:09:09.951 04:12:22 -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:09:09.951 04:12:22 -- dd/sparse.sh@103 -- # stat3_b=24576 00:09:09.951 04:12:22 -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:09:09.951 00:09:09.951 real 0m0.677s 00:09:09.951 user 0m0.383s 00:09:09.951 sys 0m0.217s 00:09:09.951 04:12:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:09.951 04:12:22 -- common/autotest_common.sh@10 -- # set +x 00:09:09.951 ************************************ 00:09:09.951 END TEST dd_sparse_bdev_to_file 00:09:09.951 ************************************ 00:09:10.235 04:12:22 -- 
dd/sparse.sh@1 -- # cleanup 00:09:10.235 04:12:22 -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:09:10.235 04:12:22 -- dd/sparse.sh@12 -- # rm file_zero1 00:09:10.235 04:12:22 -- dd/sparse.sh@13 -- # rm file_zero2 00:09:10.235 04:12:22 -- dd/sparse.sh@14 -- # rm file_zero3 00:09:10.235 00:09:10.235 real 0m2.471s 00:09:10.235 user 0m1.377s 00:09:10.235 sys 0m0.848s 00:09:10.235 04:12:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:10.235 04:12:22 -- common/autotest_common.sh@10 -- # set +x 00:09:10.235 ************************************ 00:09:10.235 END TEST spdk_dd_sparse 00:09:10.235 ************************************ 00:09:10.235 04:12:22 -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:09:10.235 04:12:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:10.235 04:12:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:10.235 04:12:22 -- common/autotest_common.sh@10 -- # set +x 00:09:10.235 ************************************ 00:09:10.235 START TEST spdk_dd_negative 00:09:10.235 ************************************ 00:09:10.235 04:12:22 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:09:10.235 * Looking for test storage... 00:09:10.235 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:09:10.235 04:12:22 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:10.235 04:12:22 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:10.235 04:12:22 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:10.235 04:12:22 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:10.235 04:12:22 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:10.235 04:12:22 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:10.235 04:12:22 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:10.235 04:12:22 -- scripts/common.sh@335 -- # IFS=.-: 00:09:10.235 04:12:22 -- scripts/common.sh@335 -- # read -ra ver1 00:09:10.235 04:12:22 -- scripts/common.sh@336 -- # IFS=.-: 00:09:10.235 04:12:22 -- scripts/common.sh@336 -- # read -ra ver2 00:09:10.235 04:12:22 -- scripts/common.sh@337 -- # local 'op=<' 00:09:10.235 04:12:22 -- scripts/common.sh@339 -- # ver1_l=2 00:09:10.235 04:12:22 -- scripts/common.sh@340 -- # ver2_l=1 00:09:10.235 04:12:22 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:10.235 04:12:22 -- scripts/common.sh@343 -- # case "$op" in 00:09:10.235 04:12:22 -- scripts/common.sh@344 -- # : 1 00:09:10.235 04:12:22 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:10.235 04:12:22 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:10.235 04:12:22 -- scripts/common.sh@364 -- # decimal 1 00:09:10.235 04:12:22 -- scripts/common.sh@352 -- # local d=1 00:09:10.235 04:12:22 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:10.235 04:12:22 -- scripts/common.sh@354 -- # echo 1 00:09:10.235 04:12:22 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:10.235 04:12:22 -- scripts/common.sh@365 -- # decimal 2 00:09:10.236 04:12:22 -- scripts/common.sh@352 -- # local d=2 00:09:10.236 04:12:22 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:10.236 04:12:22 -- scripts/common.sh@354 -- # echo 2 00:09:10.236 04:12:22 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:10.236 04:12:22 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:10.236 04:12:22 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:10.236 04:12:22 -- scripts/common.sh@367 -- # return 0 00:09:10.236 04:12:22 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:10.236 04:12:22 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:10.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.236 --rc genhtml_branch_coverage=1 00:09:10.236 --rc genhtml_function_coverage=1 00:09:10.236 --rc genhtml_legend=1 00:09:10.236 --rc geninfo_all_blocks=1 00:09:10.236 --rc geninfo_unexecuted_blocks=1 00:09:10.236 00:09:10.236 ' 00:09:10.236 04:12:22 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:10.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.236 --rc genhtml_branch_coverage=1 00:09:10.236 --rc genhtml_function_coverage=1 00:09:10.236 --rc genhtml_legend=1 00:09:10.236 --rc geninfo_all_blocks=1 00:09:10.236 --rc geninfo_unexecuted_blocks=1 00:09:10.236 00:09:10.236 ' 00:09:10.236 04:12:22 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:10.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.236 --rc genhtml_branch_coverage=1 00:09:10.236 --rc genhtml_function_coverage=1 00:09:10.236 --rc genhtml_legend=1 00:09:10.236 --rc geninfo_all_blocks=1 00:09:10.236 --rc geninfo_unexecuted_blocks=1 00:09:10.236 00:09:10.236 ' 00:09:10.236 04:12:22 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:10.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.236 --rc genhtml_branch_coverage=1 00:09:10.236 --rc genhtml_function_coverage=1 00:09:10.236 --rc genhtml_legend=1 00:09:10.236 --rc geninfo_all_blocks=1 00:09:10.236 --rc geninfo_unexecuted_blocks=1 00:09:10.236 00:09:10.236 ' 00:09:10.236 04:12:22 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:10.236 04:12:22 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:10.236 04:12:22 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:10.236 04:12:22 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:10.236 04:12:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.236 04:12:22 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.236 04:12:22 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.236 04:12:22 -- paths/export.sh@5 -- # export PATH 00:09:10.236 04:12:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.236 04:12:22 -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:10.236 04:12:22 -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:10.236 04:12:22 -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:10.236 04:12:22 -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:10.236 04:12:22 -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:09:10.236 04:12:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:10.236 04:12:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:10.236 04:12:22 -- common/autotest_common.sh@10 -- # set +x 00:09:10.496 ************************************ 00:09:10.496 START TEST dd_invalid_arguments 00:09:10.496 ************************************ 00:09:10.496 04:12:22 -- common/autotest_common.sh@1114 -- # invalid_arguments 00:09:10.496 04:12:22 -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:09:10.496 04:12:22 -- common/autotest_common.sh@650 -- # local es=0 00:09:10.496 04:12:22 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:09:10.496 04:12:22 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:10.496 04:12:22 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:10.496 04:12:22 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:10.496 04:12:22 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:10.496 04:12:22 -- common/autotest_common.sh@644 -- # type -P 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:10.496 04:12:22 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:10.496 04:12:22 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:10.496 04:12:22 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:10.496 04:12:22 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:09:10.496 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:09:10.496 options: 00:09:10.496 -c, --config JSON config file (default none) 00:09:10.496 --json JSON config file (default none) 00:09:10.496 --json-ignore-init-errors 00:09:10.496 don't exit on invalid config entry 00:09:10.496 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:09:10.496 -g, --single-file-segments 00:09:10.496 force creating just one hugetlbfs file 00:09:10.496 -h, --help show this usage 00:09:10.496 -i, --shm-id shared memory ID (optional) 00:09:10.496 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:09:10.496 --lcores lcore to CPU mapping list. The list is in the format: 00:09:10.496 [<,lcores[@CPUs]>...] 00:09:10.496 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:09:10.496 Within the group, '-' is used for range separator, 00:09:10.496 ',' is used for single number separator. 00:09:10.496 '( )' can be omitted for single element group, 00:09:10.496 '@' can be omitted if cpus and lcores have the same value 00:09:10.496 -n, --mem-channels channel number of memory channels used for DPDK 00:09:10.496 -p, --main-core main (primary) core for DPDK 00:09:10.496 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:09:10.496 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:09:10.496 --disable-cpumask-locks Disable CPU core lock files. 00:09:10.496 --silence-noticelog disable notice level logging to stderr 00:09:10.496 --msg-mempool-size global message memory pool size in count (default: 262143) 00:09:10.496 -u, --no-pci disable PCI access 00:09:10.496 --wait-for-rpc wait for RPCs to initialize subsystems 00:09:10.496 --max-delay maximum reactor delay (in microseconds) 00:09:10.496 -B, --pci-blocked pci addr to block (can be used more than once) 00:09:10.496 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:09:10.496 -R, --huge-unlink unlink huge files after initialization 00:09:10.496 -v, --version print SPDK version 00:09:10.496 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:09:10.496 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:09:10.496 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:09:10.496 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:09:10.496 Tracepoints vary in size and can use more than one trace entry. 
00:09:10.496 --rpcs-allowed comma-separated list of permitted RPCS 00:09:10.496 --env-context Opaque context for use of the env implementation 00:09:10.496 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:09:10.496 --no-huge run without using hugepages 00:09:10.496 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, iscsi_init, json_util, log, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, nvme_cuse, nvme_vfio, opal, reactor, rpc, rpc_client, sock, sock_posix, thread, trace, uring, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:09:10.496 -e, --tpoint-group [:] 00:09:10.496 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, all) 00:09:10.496 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:09:10.496 Groups and masks /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:09:10.496 [2024-12-06 04:12:22.852863] spdk_dd.c:1460:main: *ERROR*: Invalid arguments 00:09:10.496 can be combined (e.g. thread,bdev:0x1). 00:09:10.496 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:09:10.496 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:09:10.496 [--------- DD Options ---------] 00:09:10.496 --if Input file. Must specify either --if or --ib. 00:09:10.496 --ib Input bdev. Must specifier either --if or --ib 00:09:10.496 --of Output file. Must specify either --of or --ob. 00:09:10.496 --ob Output bdev. Must specify either --of or --ob. 00:09:10.496 --iflag Input file flags. 00:09:10.496 --oflag Output file flags. 00:09:10.496 --bs I/O unit size (default: 4096) 00:09:10.496 --qd Queue depth (default: 2) 00:09:10.496 --count I/O unit count. The number of I/O units to copy. (default: all) 00:09:10.496 --skip Skip this many I/O units at start of input. (default: 0) 00:09:10.496 --seek Skip this many I/O units at start of output. (default: 0) 00:09:10.496 --aio Force usage of AIO. 
(by default io_uring is used if available) 00:09:10.496 --sparse Enable hole skipping in input target 00:09:10.496 Available iflag and oflag values: 00:09:10.496 append - append mode 00:09:10.497 direct - use direct I/O for data 00:09:10.497 directory - fail unless a directory 00:09:10.497 dsync - use synchronized I/O for data 00:09:10.497 noatime - do not update access time 00:09:10.497 noctty - do not assign controlling terminal from file 00:09:10.497 nofollow - do not follow symlinks 00:09:10.497 nonblock - use non-blocking I/O 00:09:10.497 sync - use synchronized I/O for data and metadata 00:09:10.497 04:12:22 -- common/autotest_common.sh@653 -- # es=2 00:09:10.497 04:12:22 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:10.497 04:12:22 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:10.497 04:12:22 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:10.497 00:09:10.497 real 0m0.068s 00:09:10.497 user 0m0.039s 00:09:10.497 sys 0m0.029s 00:09:10.497 04:12:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:10.497 04:12:22 -- common/autotest_common.sh@10 -- # set +x 00:09:10.497 ************************************ 00:09:10.497 END TEST dd_invalid_arguments 00:09:10.497 ************************************ 00:09:10.497 04:12:22 -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:09:10.497 04:12:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:10.497 04:12:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:10.497 04:12:22 -- common/autotest_common.sh@10 -- # set +x 00:09:10.497 ************************************ 00:09:10.497 START TEST dd_double_input 00:09:10.497 ************************************ 00:09:10.497 04:12:22 -- common/autotest_common.sh@1114 -- # double_input 00:09:10.497 04:12:22 -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:09:10.497 04:12:22 -- common/autotest_common.sh@650 -- # local es=0 00:09:10.497 04:12:22 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:09:10.497 04:12:22 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:10.497 04:12:22 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:10.497 04:12:22 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:10.497 04:12:22 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:10.497 04:12:22 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:10.497 04:12:22 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:10.497 04:12:22 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:10.497 04:12:22 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:10.497 04:12:22 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:09:10.497 [2024-12-06 04:12:22.975635] spdk_dd.c:1467:main: *ERROR*: You may specify either --if or --ib, but not both. 
00:09:10.497 04:12:22 -- common/autotest_common.sh@653 -- # es=22 00:09:10.497 04:12:22 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:10.497 04:12:22 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:10.497 04:12:22 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:10.497 00:09:10.497 real 0m0.072s 00:09:10.497 user 0m0.043s 00:09:10.497 sys 0m0.028s 00:09:10.497 04:12:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:10.497 04:12:22 -- common/autotest_common.sh@10 -- # set +x 00:09:10.497 ************************************ 00:09:10.497 END TEST dd_double_input 00:09:10.497 ************************************ 00:09:10.497 04:12:23 -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:09:10.497 04:12:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:10.497 04:12:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:10.497 04:12:23 -- common/autotest_common.sh@10 -- # set +x 00:09:10.497 ************************************ 00:09:10.497 START TEST dd_double_output 00:09:10.497 ************************************ 00:09:10.497 04:12:23 -- common/autotest_common.sh@1114 -- # double_output 00:09:10.497 04:12:23 -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:09:10.497 04:12:23 -- common/autotest_common.sh@650 -- # local es=0 00:09:10.497 04:12:23 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:09:10.497 04:12:23 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:10.497 04:12:23 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:10.497 04:12:23 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:10.497 04:12:23 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:10.497 04:12:23 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:10.497 04:12:23 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:10.497 04:12:23 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:10.497 04:12:23 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:10.497 04:12:23 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:09:10.756 [2024-12-06 04:12:23.094463] spdk_dd.c:1473:main: *ERROR*: You may specify either --of or --ob, but not both. 
00:09:10.756 04:12:23 -- common/autotest_common.sh@653 -- # es=22 00:09:10.756 04:12:23 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:10.756 04:12:23 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:10.757 04:12:23 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:10.757 00:09:10.757 real 0m0.069s 00:09:10.757 user 0m0.045s 00:09:10.757 sys 0m0.022s 00:09:10.757 04:12:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:10.757 04:12:23 -- common/autotest_common.sh@10 -- # set +x 00:09:10.757 ************************************ 00:09:10.757 END TEST dd_double_output 00:09:10.757 ************************************ 00:09:10.757 04:12:23 -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:09:10.757 04:12:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:10.757 04:12:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:10.757 04:12:23 -- common/autotest_common.sh@10 -- # set +x 00:09:10.757 ************************************ 00:09:10.757 START TEST dd_no_input 00:09:10.757 ************************************ 00:09:10.757 04:12:23 -- common/autotest_common.sh@1114 -- # no_input 00:09:10.757 04:12:23 -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:09:10.757 04:12:23 -- common/autotest_common.sh@650 -- # local es=0 00:09:10.757 04:12:23 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:09:10.757 04:12:23 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:10.757 04:12:23 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:10.757 04:12:23 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:10.757 04:12:23 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:10.757 04:12:23 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:10.757 04:12:23 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:10.757 04:12:23 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:10.757 04:12:23 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:10.757 04:12:23 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:09:10.757 [2024-12-06 04:12:23.217077] spdk_dd.c:1479:main: *ERROR*: You must specify either --if or --ib 00:09:10.757 04:12:23 -- common/autotest_common.sh@653 -- # es=22 00:09:10.757 04:12:23 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:10.757 04:12:23 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:10.757 04:12:23 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:10.757 00:09:10.757 real 0m0.073s 00:09:10.757 user 0m0.043s 00:09:10.757 sys 0m0.029s 00:09:10.757 04:12:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:10.757 04:12:23 -- common/autotest_common.sh@10 -- # set +x 00:09:10.757 ************************************ 00:09:10.757 END TEST dd_no_input 00:09:10.757 ************************************ 00:09:10.757 04:12:23 -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 00:09:10.757 04:12:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:10.757 04:12:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:10.757 04:12:23 -- common/autotest_common.sh@10 -- # set +x 00:09:10.757 ************************************ 
00:09:10.757 START TEST dd_no_output 00:09:10.757 ************************************ 00:09:10.757 04:12:23 -- common/autotest_common.sh@1114 -- # no_output 00:09:10.757 04:12:23 -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:10.757 04:12:23 -- common/autotest_common.sh@650 -- # local es=0 00:09:10.757 04:12:23 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:10.757 04:12:23 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:10.757 04:12:23 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:10.757 04:12:23 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:10.757 04:12:23 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:10.757 04:12:23 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:10.757 04:12:23 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:10.757 04:12:23 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:10.757 04:12:23 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:10.757 04:12:23 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:11.016 [2024-12-06 04:12:23.341831] spdk_dd.c:1485:main: *ERROR*: You must specify either --of or --ob 00:09:11.016 04:12:23 -- common/autotest_common.sh@653 -- # es=22 00:09:11.016 04:12:23 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:11.016 04:12:23 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:11.016 04:12:23 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:11.016 00:09:11.016 real 0m0.069s 00:09:11.016 user 0m0.041s 00:09:11.016 sys 0m0.027s 00:09:11.016 04:12:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:11.016 04:12:23 -- common/autotest_common.sh@10 -- # set +x 00:09:11.016 ************************************ 00:09:11.016 END TEST dd_no_output 00:09:11.016 ************************************ 00:09:11.016 04:12:23 -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:09:11.016 04:12:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:11.016 04:12:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:11.016 04:12:23 -- common/autotest_common.sh@10 -- # set +x 00:09:11.016 ************************************ 00:09:11.016 START TEST dd_wrong_blocksize 00:09:11.016 ************************************ 00:09:11.016 04:12:23 -- common/autotest_common.sh@1114 -- # wrong_blocksize 00:09:11.016 04:12:23 -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:09:11.016 04:12:23 -- common/autotest_common.sh@650 -- # local es=0 00:09:11.016 04:12:23 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:09:11.016 04:12:23 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:11.016 04:12:23 -- common/autotest_common.sh@642 -- # case 
"$(type -t "$arg")" in 00:09:11.016 04:12:23 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:11.016 04:12:23 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:11.016 04:12:23 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:11.016 04:12:23 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:11.016 04:12:23 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:11.017 04:12:23 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:11.017 04:12:23 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:09:11.017 [2024-12-06 04:12:23.459529] spdk_dd.c:1491:main: *ERROR*: Invalid --bs value 00:09:11.017 04:12:23 -- common/autotest_common.sh@653 -- # es=22 00:09:11.017 04:12:23 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:11.017 04:12:23 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:11.017 04:12:23 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:11.017 00:09:11.017 real 0m0.066s 00:09:11.017 user 0m0.035s 00:09:11.017 sys 0m0.030s 00:09:11.017 04:12:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:11.017 04:12:23 -- common/autotest_common.sh@10 -- # set +x 00:09:11.017 ************************************ 00:09:11.017 END TEST dd_wrong_blocksize 00:09:11.017 ************************************ 00:09:11.017 04:12:23 -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:09:11.017 04:12:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:11.017 04:12:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:11.017 04:12:23 -- common/autotest_common.sh@10 -- # set +x 00:09:11.017 ************************************ 00:09:11.017 START TEST dd_smaller_blocksize 00:09:11.017 ************************************ 00:09:11.017 04:12:23 -- common/autotest_common.sh@1114 -- # smaller_blocksize 00:09:11.017 04:12:23 -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:09:11.017 04:12:23 -- common/autotest_common.sh@650 -- # local es=0 00:09:11.017 04:12:23 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:09:11.017 04:12:23 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:11.017 04:12:23 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:11.017 04:12:23 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:11.017 04:12:23 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:11.017 04:12:23 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:11.017 04:12:23 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:11.017 04:12:23 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:11.017 04:12:23 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 
00:09:11.017 04:12:23 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:09:11.275 [2024-12-06 04:12:23.588670] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:11.275 [2024-12-06 04:12:23.588802] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71911 ] 00:09:11.275 [2024-12-06 04:12:23.734661] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.275 [2024-12-06 04:12:23.811296] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.534 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:09:11.534 [2024-12-06 04:12:23.902066] spdk_dd.c:1168:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:09:11.534 [2024-12-06 04:12:23.902100] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:11.534 [2024-12-06 04:12:24.023778] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:09:11.793 04:12:24 -- common/autotest_common.sh@653 -- # es=244 00:09:11.793 04:12:24 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:11.793 04:12:24 -- common/autotest_common.sh@662 -- # es=116 00:09:11.793 04:12:24 -- common/autotest_common.sh@663 -- # case "$es" in 00:09:11.793 04:12:24 -- common/autotest_common.sh@670 -- # es=1 00:09:11.793 04:12:24 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:11.793 00:09:11.793 real 0m0.576s 00:09:11.793 user 0m0.304s 00:09:11.793 sys 0m0.167s 00:09:11.793 04:12:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:11.793 04:12:24 -- common/autotest_common.sh@10 -- # set +x 00:09:11.793 ************************************ 00:09:11.793 END TEST dd_smaller_blocksize 00:09:11.793 ************************************ 00:09:11.793 04:12:24 -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:09:11.793 04:12:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:11.793 04:12:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:11.793 04:12:24 -- common/autotest_common.sh@10 -- # set +x 00:09:11.793 ************************************ 00:09:11.793 START TEST dd_invalid_count 00:09:11.793 ************************************ 00:09:11.793 04:12:24 -- common/autotest_common.sh@1114 -- # invalid_count 00:09:11.793 04:12:24 -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:09:11.793 04:12:24 -- common/autotest_common.sh@650 -- # local es=0 00:09:11.793 04:12:24 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:09:11.793 04:12:24 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:11.793 04:12:24 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:11.793 04:12:24 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:11.793 04:12:24 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:11.793 04:12:24 
-- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:11.793 04:12:24 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:11.793 04:12:24 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:11.793 04:12:24 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:11.793 04:12:24 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:09:11.793 [2024-12-06 04:12:24.209664] spdk_dd.c:1497:main: *ERROR*: Invalid --count value 00:09:11.793 04:12:24 -- common/autotest_common.sh@653 -- # es=22 00:09:11.793 04:12:24 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:11.793 04:12:24 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:11.793 04:12:24 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:11.793 00:09:11.793 real 0m0.068s 00:09:11.793 user 0m0.043s 00:09:11.793 sys 0m0.024s 00:09:11.793 04:12:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:11.793 04:12:24 -- common/autotest_common.sh@10 -- # set +x 00:09:11.793 ************************************ 00:09:11.793 END TEST dd_invalid_count 00:09:11.793 ************************************ 00:09:11.793 04:12:24 -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:09:11.793 04:12:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:11.793 04:12:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:11.793 04:12:24 -- common/autotest_common.sh@10 -- # set +x 00:09:11.793 ************************************ 00:09:11.793 START TEST dd_invalid_oflag 00:09:11.793 ************************************ 00:09:11.793 04:12:24 -- common/autotest_common.sh@1114 -- # invalid_oflag 00:09:11.793 04:12:24 -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:09:11.793 04:12:24 -- common/autotest_common.sh@650 -- # local es=0 00:09:11.793 04:12:24 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:09:11.793 04:12:24 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:11.793 04:12:24 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:11.793 04:12:24 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:11.793 04:12:24 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:11.793 04:12:24 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:11.793 04:12:24 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:11.793 04:12:24 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:11.793 04:12:24 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:11.793 04:12:24 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:09:11.793 [2024-12-06 04:12:24.328403] spdk_dd.c:1503:main: *ERROR*: --oflags may be used only with --of 00:09:11.793 04:12:24 -- common/autotest_common.sh@653 -- # es=22 00:09:11.793 04:12:24 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:11.793 04:12:24 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:11.793 
04:12:24 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:11.793 00:09:11.793 real 0m0.068s 00:09:11.793 user 0m0.045s 00:09:11.793 sys 0m0.021s 00:09:11.793 04:12:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:11.793 04:12:24 -- common/autotest_common.sh@10 -- # set +x 00:09:11.793 ************************************ 00:09:11.793 END TEST dd_invalid_oflag 00:09:11.793 ************************************ 00:09:12.053 04:12:24 -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:09:12.053 04:12:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:12.053 04:12:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:12.053 04:12:24 -- common/autotest_common.sh@10 -- # set +x 00:09:12.053 ************************************ 00:09:12.053 START TEST dd_invalid_iflag 00:09:12.053 ************************************ 00:09:12.053 04:12:24 -- common/autotest_common.sh@1114 -- # invalid_iflag 00:09:12.053 04:12:24 -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:09:12.053 04:12:24 -- common/autotest_common.sh@650 -- # local es=0 00:09:12.053 04:12:24 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:09:12.053 04:12:24 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:12.053 04:12:24 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:12.053 04:12:24 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:12.053 04:12:24 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:12.053 04:12:24 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:12.053 04:12:24 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:12.053 04:12:24 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:12.053 04:12:24 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:12.053 04:12:24 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:09:12.053 [2024-12-06 04:12:24.451657] spdk_dd.c:1509:main: *ERROR*: --iflags may be used only with --if 00:09:12.053 04:12:24 -- common/autotest_common.sh@653 -- # es=22 00:09:12.053 04:12:24 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:12.053 04:12:24 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:12.053 04:12:24 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:12.053 00:09:12.053 real 0m0.074s 00:09:12.053 user 0m0.048s 00:09:12.053 sys 0m0.024s 00:09:12.053 04:12:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:12.053 04:12:24 -- common/autotest_common.sh@10 -- # set +x 00:09:12.053 ************************************ 00:09:12.053 END TEST dd_invalid_iflag 00:09:12.053 ************************************ 00:09:12.053 04:12:24 -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:09:12.053 04:12:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:12.053 04:12:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:12.053 04:12:24 -- common/autotest_common.sh@10 -- # set +x 00:09:12.053 ************************************ 00:09:12.053 START TEST dd_unknown_flag 00:09:12.053 ************************************ 00:09:12.053 04:12:24 -- common/autotest_common.sh@1114 -- # 
unknown_flag 00:09:12.053 04:12:24 -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:09:12.053 04:12:24 -- common/autotest_common.sh@650 -- # local es=0 00:09:12.053 04:12:24 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:09:12.053 04:12:24 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:12.053 04:12:24 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:12.053 04:12:24 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:12.053 04:12:24 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:12.053 04:12:24 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:12.053 04:12:24 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:12.053 04:12:24 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:12.053 04:12:24 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:12.053 04:12:24 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:09:12.053 [2024-12-06 04:12:24.573732] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:12.053 [2024-12-06 04:12:24.573817] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71999 ] 00:09:12.312 [2024-12-06 04:12:24.711385] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:12.312 [2024-12-06 04:12:24.792365] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:12.572 [2024-12-06 04:12:24.880471] spdk_dd.c: 985:parse_flags: *ERROR*: Unknown file flag: -1 00:09:12.572 [2024-12-06 04:12:24.880553] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1: Not a directory 00:09:12.572 [2024-12-06 04:12:24.880566] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1: Not a directory 00:09:12.572 [2024-12-06 04:12:24.880579] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:12.572 [2024-12-06 04:12:24.995098] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:09:12.572 04:12:25 -- common/autotest_common.sh@653 -- # es=236 00:09:12.572 04:12:25 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:12.572 04:12:25 -- common/autotest_common.sh@662 -- # es=108 00:09:12.572 04:12:25 -- common/autotest_common.sh@663 -- # case "$es" in 00:09:12.572 04:12:25 -- common/autotest_common.sh@670 -- # es=1 00:09:12.572 04:12:25 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:12.572 00:09:12.572 real 0m0.555s 00:09:12.572 user 0m0.291s 00:09:12.572 sys 0m0.159s 00:09:12.572 04:12:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:12.572 04:12:25 -- common/autotest_common.sh@10 -- # set +x 00:09:12.572 ************************************ 00:09:12.572 END 
TEST dd_unknown_flag 00:09:12.572 ************************************ 00:09:12.572 04:12:25 -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:09:12.572 04:12:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:12.572 04:12:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:12.572 04:12:25 -- common/autotest_common.sh@10 -- # set +x 00:09:12.572 ************************************ 00:09:12.572 START TEST dd_invalid_json 00:09:12.572 ************************************ 00:09:12.572 04:12:25 -- common/autotest_common.sh@1114 -- # invalid_json 00:09:12.572 04:12:25 -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:09:12.572 04:12:25 -- dd/negative_dd.sh@95 -- # : 00:09:12.572 04:12:25 -- common/autotest_common.sh@650 -- # local es=0 00:09:12.572 04:12:25 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:09:12.572 04:12:25 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:12.572 04:12:25 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:12.572 04:12:25 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:12.572 04:12:25 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:12.572 04:12:25 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:12.831 04:12:25 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:12.831 04:12:25 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:12.831 04:12:25 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:12.831 04:12:25 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:09:12.831 [2024-12-06 04:12:25.188349] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:09:12.831 [2024-12-06 04:12:25.188509] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72026 ] 00:09:12.831 [2024-12-06 04:12:25.332407] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:13.089 [2024-12-06 04:12:25.407785] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.089 [2024-12-06 04:12:25.407905] json_config.c: 529:app_json_config_read: *ERROR*: Parsing JSON configuration failed (-2) 00:09:13.089 [2024-12-06 04:12:25.407926] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:13.089 [2024-12-06 04:12:25.407964] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:09:13.089 04:12:25 -- common/autotest_common.sh@653 -- # es=234 00:09:13.089 04:12:25 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:13.089 04:12:25 -- common/autotest_common.sh@662 -- # es=106 00:09:13.089 ************************************ 00:09:13.089 END TEST dd_invalid_json 00:09:13.089 ************************************ 00:09:13.089 04:12:25 -- common/autotest_common.sh@663 -- # case "$es" in 00:09:13.089 04:12:25 -- common/autotest_common.sh@670 -- # es=1 00:09:13.089 04:12:25 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:13.089 00:09:13.089 real 0m0.355s 00:09:13.089 user 0m0.169s 00:09:13.089 sys 0m0.085s 00:09:13.089 04:12:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:13.089 04:12:25 -- common/autotest_common.sh@10 -- # set +x 00:09:13.089 ************************************ 00:09:13.089 END TEST spdk_dd_negative 00:09:13.089 ************************************ 00:09:13.089 00:09:13.089 real 0m2.924s 00:09:13.089 user 0m1.452s 00:09:13.089 sys 0m1.116s 00:09:13.089 04:12:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:13.089 04:12:25 -- common/autotest_common.sh@10 -- # set +x 00:09:13.089 00:09:13.089 real 1m16.485s 00:09:13.089 user 0m46.607s 00:09:13.089 sys 0m20.751s 00:09:13.089 ************************************ 00:09:13.089 END TEST spdk_dd 00:09:13.089 ************************************ 00:09:13.089 04:12:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:13.089 04:12:25 -- common/autotest_common.sh@10 -- # set +x 00:09:13.089 04:12:25 -- spdk/autotest.sh@204 -- # '[' 0 -eq 1 ']' 00:09:13.089 04:12:25 -- spdk/autotest.sh@251 -- # '[' 0 -eq 1 ']' 00:09:13.089 04:12:25 -- spdk/autotest.sh@255 -- # timing_exit lib 00:09:13.089 04:12:25 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:13.089 04:12:25 -- common/autotest_common.sh@10 -- # set +x 00:09:13.089 04:12:25 -- spdk/autotest.sh@257 -- # '[' 0 -eq 1 ']' 00:09:13.089 04:12:25 -- spdk/autotest.sh@265 -- # '[' 0 -eq 1 ']' 00:09:13.089 04:12:25 -- spdk/autotest.sh@274 -- # '[' 1 -eq 1 ']' 00:09:13.089 04:12:25 -- spdk/autotest.sh@275 -- # export NET_TYPE 00:09:13.089 04:12:25 -- spdk/autotest.sh@278 -- # '[' tcp = rdma ']' 00:09:13.089 04:12:25 -- spdk/autotest.sh@281 -- # '[' tcp = tcp ']' 00:09:13.089 04:12:25 -- spdk/autotest.sh@282 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:09:13.089 04:12:25 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:13.089 04:12:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:13.089 04:12:25 -- common/autotest_common.sh@10 -- # set +x 00:09:13.089 ************************************ 00:09:13.089 START 
TEST nvmf_tcp 00:09:13.089 ************************************ 00:09:13.089 04:12:25 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:09:13.347 * Looking for test storage... 00:09:13.347 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:09:13.347 04:12:25 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:13.347 04:12:25 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:13.347 04:12:25 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:13.347 04:12:25 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:13.347 04:12:25 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:13.347 04:12:25 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:13.347 04:12:25 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:13.347 04:12:25 -- scripts/common.sh@335 -- # IFS=.-: 00:09:13.347 04:12:25 -- scripts/common.sh@335 -- # read -ra ver1 00:09:13.347 04:12:25 -- scripts/common.sh@336 -- # IFS=.-: 00:09:13.347 04:12:25 -- scripts/common.sh@336 -- # read -ra ver2 00:09:13.347 04:12:25 -- scripts/common.sh@337 -- # local 'op=<' 00:09:13.347 04:12:25 -- scripts/common.sh@339 -- # ver1_l=2 00:09:13.347 04:12:25 -- scripts/common.sh@340 -- # ver2_l=1 00:09:13.347 04:12:25 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:13.347 04:12:25 -- scripts/common.sh@343 -- # case "$op" in 00:09:13.347 04:12:25 -- scripts/common.sh@344 -- # : 1 00:09:13.347 04:12:25 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:13.347 04:12:25 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:13.347 04:12:25 -- scripts/common.sh@364 -- # decimal 1 00:09:13.347 04:12:25 -- scripts/common.sh@352 -- # local d=1 00:09:13.347 04:12:25 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:13.347 04:12:25 -- scripts/common.sh@354 -- # echo 1 00:09:13.347 04:12:25 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:13.347 04:12:25 -- scripts/common.sh@365 -- # decimal 2 00:09:13.347 04:12:25 -- scripts/common.sh@352 -- # local d=2 00:09:13.347 04:12:25 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:13.347 04:12:25 -- scripts/common.sh@354 -- # echo 2 00:09:13.347 04:12:25 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:13.347 04:12:25 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:13.347 04:12:25 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:13.347 04:12:25 -- scripts/common.sh@367 -- # return 0 00:09:13.347 04:12:25 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:13.347 04:12:25 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:13.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.347 --rc genhtml_branch_coverage=1 00:09:13.347 --rc genhtml_function_coverage=1 00:09:13.347 --rc genhtml_legend=1 00:09:13.347 --rc geninfo_all_blocks=1 00:09:13.347 --rc geninfo_unexecuted_blocks=1 00:09:13.347 00:09:13.347 ' 00:09:13.347 04:12:25 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:13.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.347 --rc genhtml_branch_coverage=1 00:09:13.347 --rc genhtml_function_coverage=1 00:09:13.347 --rc genhtml_legend=1 00:09:13.347 --rc geninfo_all_blocks=1 00:09:13.347 --rc geninfo_unexecuted_blocks=1 00:09:13.347 00:09:13.347 ' 00:09:13.347 04:12:25 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:13.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.347 --rc 
genhtml_branch_coverage=1 00:09:13.347 --rc genhtml_function_coverage=1 00:09:13.347 --rc genhtml_legend=1 00:09:13.347 --rc geninfo_all_blocks=1 00:09:13.347 --rc geninfo_unexecuted_blocks=1 00:09:13.347 00:09:13.347 ' 00:09:13.347 04:12:25 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:13.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.347 --rc genhtml_branch_coverage=1 00:09:13.347 --rc genhtml_function_coverage=1 00:09:13.347 --rc genhtml_legend=1 00:09:13.347 --rc geninfo_all_blocks=1 00:09:13.347 --rc geninfo_unexecuted_blocks=1 00:09:13.347 00:09:13.347 ' 00:09:13.347 04:12:25 -- nvmf/nvmf.sh@10 -- # uname -s 00:09:13.347 04:12:25 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:09:13.347 04:12:25 -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:13.347 04:12:25 -- nvmf/common.sh@7 -- # uname -s 00:09:13.347 04:12:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:13.347 04:12:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:13.347 04:12:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:13.347 04:12:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:13.347 04:12:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:13.347 04:12:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:13.347 04:12:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:13.347 04:12:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:13.347 04:12:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:13.347 04:12:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:13.347 04:12:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cb4d3929-adbe-4142-b5d1-990bbf2d4fca 00:09:13.347 04:12:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=cb4d3929-adbe-4142-b5d1-990bbf2d4fca 00:09:13.347 04:12:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:13.347 04:12:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:13.347 04:12:25 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:13.347 04:12:25 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:13.347 04:12:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:13.347 04:12:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:13.347 04:12:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:13.347 04:12:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.347 04:12:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.347 04:12:25 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.347 04:12:25 -- paths/export.sh@5 -- # export PATH 00:09:13.347 04:12:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.347 04:12:25 -- nvmf/common.sh@46 -- # : 0 00:09:13.347 04:12:25 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:13.347 04:12:25 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:13.347 04:12:25 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:13.347 04:12:25 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:13.347 04:12:25 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:13.347 04:12:25 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:09:13.347 04:12:25 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:13.347 04:12:25 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:13.347 04:12:25 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:13.347 04:12:25 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:09:13.347 04:12:25 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:09:13.347 04:12:25 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:13.347 04:12:25 -- common/autotest_common.sh@10 -- # set +x 00:09:13.347 04:12:25 -- nvmf/nvmf.sh@22 -- # [[ 1 -eq 0 ]] 00:09:13.347 04:12:25 -- nvmf/nvmf.sh@46 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:09:13.347 04:12:25 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:13.347 04:12:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:13.347 04:12:25 -- common/autotest_common.sh@10 -- # set +x 00:09:13.347 ************************************ 00:09:13.347 START TEST nvmf_host_management 00:09:13.347 ************************************ 00:09:13.347 04:12:25 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:09:13.607 * Looking for test storage... 
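Each suite in this log is driven through a run_test-style wrapper: it prints the START/END banner rows, times the command it is given (the real/user/sys lines scattered through the output), and propagates the exit status so the overall autotest run can fail. A minimal sketch of that pattern, assuming a simplified interface and not the verbatim autotest_common.sh helper:

# Hypothetical, reduced version of the run_test pattern seen in this trace.
run_test() {
    local name=$1 rc
    shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"        # emits the real/user/sys summary seen after each suite
    rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return "$rc"
}

# Usage as in the trace above:
run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp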
00:09:13.607 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:13.607 04:12:25 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:13.607 04:12:25 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:13.607 04:12:25 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:13.607 04:12:26 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:13.607 04:12:26 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:13.607 04:12:26 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:13.607 04:12:26 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:13.607 04:12:26 -- scripts/common.sh@335 -- # IFS=.-: 00:09:13.607 04:12:26 -- scripts/common.sh@335 -- # read -ra ver1 00:09:13.607 04:12:26 -- scripts/common.sh@336 -- # IFS=.-: 00:09:13.607 04:12:26 -- scripts/common.sh@336 -- # read -ra ver2 00:09:13.607 04:12:26 -- scripts/common.sh@337 -- # local 'op=<' 00:09:13.607 04:12:26 -- scripts/common.sh@339 -- # ver1_l=2 00:09:13.607 04:12:26 -- scripts/common.sh@340 -- # ver2_l=1 00:09:13.607 04:12:26 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:13.607 04:12:26 -- scripts/common.sh@343 -- # case "$op" in 00:09:13.607 04:12:26 -- scripts/common.sh@344 -- # : 1 00:09:13.607 04:12:26 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:13.607 04:12:26 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:13.607 04:12:26 -- scripts/common.sh@364 -- # decimal 1 00:09:13.607 04:12:26 -- scripts/common.sh@352 -- # local d=1 00:09:13.607 04:12:26 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:13.607 04:12:26 -- scripts/common.sh@354 -- # echo 1 00:09:13.607 04:12:26 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:13.607 04:12:26 -- scripts/common.sh@365 -- # decimal 2 00:09:13.607 04:12:26 -- scripts/common.sh@352 -- # local d=2 00:09:13.607 04:12:26 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:13.607 04:12:26 -- scripts/common.sh@354 -- # echo 2 00:09:13.607 04:12:26 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:13.607 04:12:26 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:13.607 04:12:26 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:13.607 04:12:26 -- scripts/common.sh@367 -- # return 0 00:09:13.607 04:12:26 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:13.607 04:12:26 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:13.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.607 --rc genhtml_branch_coverage=1 00:09:13.607 --rc genhtml_function_coverage=1 00:09:13.607 --rc genhtml_legend=1 00:09:13.607 --rc geninfo_all_blocks=1 00:09:13.607 --rc geninfo_unexecuted_blocks=1 00:09:13.607 00:09:13.607 ' 00:09:13.607 04:12:26 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:13.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.607 --rc genhtml_branch_coverage=1 00:09:13.607 --rc genhtml_function_coverage=1 00:09:13.607 --rc genhtml_legend=1 00:09:13.607 --rc geninfo_all_blocks=1 00:09:13.607 --rc geninfo_unexecuted_blocks=1 00:09:13.607 00:09:13.607 ' 00:09:13.607 04:12:26 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:13.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.607 --rc genhtml_branch_coverage=1 00:09:13.607 --rc genhtml_function_coverage=1 00:09:13.607 --rc genhtml_legend=1 00:09:13.607 --rc geninfo_all_blocks=1 00:09:13.607 --rc geninfo_unexecuted_blocks=1 00:09:13.607 00:09:13.607 ' 00:09:13.607 
04:12:26 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:13.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.607 --rc genhtml_branch_coverage=1 00:09:13.607 --rc genhtml_function_coverage=1 00:09:13.607 --rc genhtml_legend=1 00:09:13.607 --rc geninfo_all_blocks=1 00:09:13.607 --rc geninfo_unexecuted_blocks=1 00:09:13.607 00:09:13.607 ' 00:09:13.607 04:12:26 -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:13.607 04:12:26 -- nvmf/common.sh@7 -- # uname -s 00:09:13.607 04:12:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:13.607 04:12:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:13.607 04:12:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:13.607 04:12:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:13.607 04:12:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:13.607 04:12:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:13.607 04:12:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:13.607 04:12:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:13.607 04:12:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:13.607 04:12:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:13.607 04:12:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cb4d3929-adbe-4142-b5d1-990bbf2d4fca 00:09:13.607 04:12:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=cb4d3929-adbe-4142-b5d1-990bbf2d4fca 00:09:13.607 04:12:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:13.607 04:12:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:13.607 04:12:26 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:13.607 04:12:26 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:13.607 04:12:26 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:13.607 04:12:26 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:13.607 04:12:26 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:13.607 04:12:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.607 04:12:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.607 04:12:26 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.607 04:12:26 -- paths/export.sh@5 -- # export PATH 00:09:13.607 04:12:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.607 04:12:26 -- nvmf/common.sh@46 -- # : 0 00:09:13.607 04:12:26 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:13.607 04:12:26 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:13.607 04:12:26 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:13.607 04:12:26 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:13.607 04:12:26 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:13.607 04:12:26 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:09:13.607 04:12:26 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:13.607 04:12:26 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:13.607 04:12:26 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:13.607 04:12:26 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:13.607 04:12:26 -- target/host_management.sh@104 -- # nvmftestinit 00:09:13.607 04:12:26 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:09:13.607 04:12:26 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:13.607 04:12:26 -- nvmf/common.sh@436 -- # prepare_net_devs 00:09:13.607 04:12:26 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:09:13.607 04:12:26 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:09:13.607 04:12:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:13.607 04:12:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:13.607 04:12:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:13.607 04:12:26 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:09:13.607 04:12:26 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:09:13.607 04:12:26 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:09:13.607 04:12:26 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:09:13.607 04:12:26 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:09:13.607 04:12:26 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:09:13.607 04:12:26 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:13.607 04:12:26 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:13.607 04:12:26 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:13.607 04:12:26 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:09:13.607 04:12:26 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:13.607 04:12:26 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:13.607 04:12:26 -- 
nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:13.608 04:12:26 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:13.608 04:12:26 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:13.608 04:12:26 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:13.608 04:12:26 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:13.608 04:12:26 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:13.608 04:12:26 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:09:13.608 Cannot find device "nvmf_init_br" 00:09:13.608 04:12:26 -- nvmf/common.sh@153 -- # true 00:09:13.608 04:12:26 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:09:13.608 Cannot find device "nvmf_tgt_br" 00:09:13.608 04:12:26 -- nvmf/common.sh@154 -- # true 00:09:13.608 04:12:26 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:09:13.608 Cannot find device "nvmf_tgt_br2" 00:09:13.608 04:12:26 -- nvmf/common.sh@155 -- # true 00:09:13.608 04:12:26 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:09:13.608 Cannot find device "nvmf_init_br" 00:09:13.608 04:12:26 -- nvmf/common.sh@156 -- # true 00:09:13.608 04:12:26 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:09:13.608 Cannot find device "nvmf_tgt_br" 00:09:13.608 04:12:26 -- nvmf/common.sh@157 -- # true 00:09:13.608 04:12:26 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:09:13.866 Cannot find device "nvmf_tgt_br2" 00:09:13.866 04:12:26 -- nvmf/common.sh@158 -- # true 00:09:13.866 04:12:26 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:09:13.866 Cannot find device "nvmf_br" 00:09:13.866 04:12:26 -- nvmf/common.sh@159 -- # true 00:09:13.866 04:12:26 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:09:13.866 Cannot find device "nvmf_init_if" 00:09:13.866 04:12:26 -- nvmf/common.sh@160 -- # true 00:09:13.866 04:12:26 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:13.866 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:13.866 04:12:26 -- nvmf/common.sh@161 -- # true 00:09:13.866 04:12:26 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:13.866 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:13.866 04:12:26 -- nvmf/common.sh@162 -- # true 00:09:13.866 04:12:26 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:09:13.867 04:12:26 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:13.867 04:12:26 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:13.867 04:12:26 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:13.867 04:12:26 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:13.867 04:12:26 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:13.867 04:12:26 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:13.867 04:12:26 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:13.867 04:12:26 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:13.867 04:12:26 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:09:13.867 04:12:26 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:09:13.867 04:12:26 -- 
nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:09:13.867 04:12:26 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:09:13.867 04:12:26 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:13.867 04:12:26 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:13.867 04:12:26 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:13.867 04:12:26 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:09:13.867 04:12:26 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:09:13.867 04:12:26 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:09:13.867 04:12:26 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:13.867 04:12:26 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:14.126 04:12:26 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:14.126 04:12:26 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:14.126 04:12:26 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:09:14.126 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:14.126 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.345 ms 00:09:14.126 00:09:14.126 --- 10.0.0.2 ping statistics --- 00:09:14.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:14.126 rtt min/avg/max/mdev = 0.345/0.345/0.345/0.000 ms 00:09:14.126 04:12:26 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:09:14.126 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:14.126 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.144 ms 00:09:14.126 00:09:14.126 --- 10.0.0.3 ping statistics --- 00:09:14.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:14.126 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:09:14.126 04:12:26 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:14.126 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:14.126 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:09:14.126 00:09:14.126 --- 10.0.0.1 ping statistics --- 00:09:14.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:14.126 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:09:14.126 04:12:26 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:14.126 04:12:26 -- nvmf/common.sh@421 -- # return 0 00:09:14.126 04:12:26 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:09:14.126 04:12:26 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:14.126 04:12:26 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:09:14.126 04:12:26 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:09:14.126 04:12:26 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:14.126 04:12:26 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:09:14.126 04:12:26 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:09:14.126 04:12:26 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:09:14.126 04:12:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:14.126 04:12:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:14.126 04:12:26 -- common/autotest_common.sh@10 -- # set +x 00:09:14.126 ************************************ 00:09:14.126 START TEST nvmf_host_management 00:09:14.126 ************************************ 00:09:14.126 04:12:26 -- common/autotest_common.sh@1114 -- # nvmf_host_management 00:09:14.126 04:12:26 -- target/host_management.sh@69 -- # starttarget 00:09:14.126 04:12:26 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:09:14.126 04:12:26 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:14.126 04:12:26 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:14.126 04:12:26 -- common/autotest_common.sh@10 -- # set +x 00:09:14.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:14.126 04:12:26 -- nvmf/common.sh@469 -- # nvmfpid=72305 00:09:14.126 04:12:26 -- nvmf/common.sh@470 -- # waitforlisten 72305 00:09:14.126 04:12:26 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:09:14.126 04:12:26 -- common/autotest_common.sh@829 -- # '[' -z 72305 ']' 00:09:14.126 04:12:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:14.126 04:12:26 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:14.126 04:12:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:14.126 04:12:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:14.126 04:12:26 -- common/autotest_common.sh@10 -- # set +x 00:09:14.126 [2024-12-06 04:12:26.576341] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:14.126 [2024-12-06 04:12:26.576628] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:14.384 [2024-12-06 04:12:26.718900] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:14.384 [2024-12-06 04:12:26.802536] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:14.384 [2024-12-06 04:12:26.802714] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
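The nvmf_veth_init sequence traced above (just before the target application is launched) builds the virtual test topology: a network namespace for the target, veth pairs for the initiator and target sides, a bridge joining them, an iptables rule admitting TCP port 4420, and ping checks in both directions. Condensed into a standalone sketch based on the ip/iptables calls in the trace, run as root; the second target interface (nvmf_tgt_if2 with 10.0.0.3) is set up the same way and omitted here:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br       # initiator side
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br         # target side
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge                                  # bridge tying both sides together
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2                                               # initiator -> target
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1                # target -> initiator

With this in place the target runs under `ip netns exec nvmf_tgt_ns_spdk` and listens on 10.0.0.2, while initiator-side tools connect from the default namespace, exactly as the ping statistics above confirm.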
00:09:14.384 [2024-12-06 04:12:26.802731] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:14.384 [2024-12-06 04:12:26.802742] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:14.384 [2024-12-06 04:12:26.802877] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:14.384 [2024-12-06 04:12:26.803608] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:14.384 [2024-12-06 04:12:26.803729] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:09:14.384 [2024-12-06 04:12:26.803739] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:15.318 04:12:27 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:15.318 04:12:27 -- common/autotest_common.sh@862 -- # return 0 00:09:15.318 04:12:27 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:15.318 04:12:27 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:15.318 04:12:27 -- common/autotest_common.sh@10 -- # set +x 00:09:15.318 04:12:27 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:15.318 04:12:27 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:15.319 04:12:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.319 04:12:27 -- common/autotest_common.sh@10 -- # set +x 00:09:15.319 [2024-12-06 04:12:27.651152] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:15.319 04:12:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.319 04:12:27 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:09:15.319 04:12:27 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:15.319 04:12:27 -- common/autotest_common.sh@10 -- # set +x 00:09:15.319 04:12:27 -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:09:15.319 04:12:27 -- target/host_management.sh@23 -- # cat 00:09:15.319 04:12:27 -- target/host_management.sh@30 -- # rpc_cmd 00:09:15.319 04:12:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.319 04:12:27 -- common/autotest_common.sh@10 -- # set +x 00:09:15.319 Malloc0 00:09:15.319 [2024-12-06 04:12:27.734751] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:15.319 04:12:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.319 04:12:27 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:09:15.319 04:12:27 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:15.319 04:12:27 -- common/autotest_common.sh@10 -- # set +x 00:09:15.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:15.319 04:12:27 -- target/host_management.sh@73 -- # perfpid=72363 00:09:15.319 04:12:27 -- target/host_management.sh@74 -- # waitforlisten 72363 /var/tmp/bdevperf.sock 00:09:15.319 04:12:27 -- common/autotest_common.sh@829 -- # '[' -z 72363 ']' 00:09:15.319 04:12:27 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:15.319 04:12:27 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:15.319 04:12:27 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
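The rpcs.txt batch assembled by host_management.sh is not echoed in the trace, but the configuration it produces is visible: a TCP transport (nvmf_create_transport -t tcp -o -u 8192), a 64 MiB / 512 B Malloc0 bdev, and subsystem nqn.2016-06.io.spdk:cnode0 listening on 10.0.0.2:4420. An illustrative equivalent using individual scripts/rpc.py calls is sketched below; the flag spellings are assumptions based on current rpc.py usage, not copied from the script:

RPC="scripts/rpc.py"
$RPC nvmf_create_transport -t tcp -o -u 8192                     # as traced above
$RPC bdev_malloc_create -b Malloc0 64 512                        # MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420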
00:09:15.319 04:12:27 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:15.319 04:12:27 -- common/autotest_common.sh@10 -- # set +x 00:09:15.319 04:12:27 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:09:15.319 04:12:27 -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:09:15.319 04:12:27 -- nvmf/common.sh@520 -- # config=() 00:09:15.319 04:12:27 -- nvmf/common.sh@520 -- # local subsystem config 00:09:15.319 04:12:27 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:09:15.319 04:12:27 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:09:15.319 { 00:09:15.319 "params": { 00:09:15.319 "name": "Nvme$subsystem", 00:09:15.319 "trtype": "$TEST_TRANSPORT", 00:09:15.319 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:15.319 "adrfam": "ipv4", 00:09:15.319 "trsvcid": "$NVMF_PORT", 00:09:15.319 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:15.319 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:15.319 "hdgst": ${hdgst:-false}, 00:09:15.319 "ddgst": ${ddgst:-false} 00:09:15.319 }, 00:09:15.319 "method": "bdev_nvme_attach_controller" 00:09:15.319 } 00:09:15.319 EOF 00:09:15.319 )") 00:09:15.319 04:12:27 -- nvmf/common.sh@542 -- # cat 00:09:15.319 04:12:27 -- nvmf/common.sh@544 -- # jq . 00:09:15.319 04:12:27 -- nvmf/common.sh@545 -- # IFS=, 00:09:15.319 04:12:27 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:09:15.319 "params": { 00:09:15.319 "name": "Nvme0", 00:09:15.319 "trtype": "tcp", 00:09:15.319 "traddr": "10.0.0.2", 00:09:15.319 "adrfam": "ipv4", 00:09:15.319 "trsvcid": "4420", 00:09:15.319 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:15.319 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:15.319 "hdgst": false, 00:09:15.319 "ddgst": false 00:09:15.319 }, 00:09:15.319 "method": "bdev_nvme_attach_controller" 00:09:15.319 }' 00:09:15.319 [2024-12-06 04:12:27.837537] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:15.319 [2024-12-06 04:12:27.837624] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72363 ] 00:09:15.578 [2024-12-06 04:12:27.979998] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:15.578 [2024-12-06 04:12:28.050903] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.836 Running I/O for 10 seconds... 
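The --json /dev/fd/63 argument to bdevperf is a process substitution carrying the output of gen_nvmf_target_json; the resolved controller entry is printed just above. The same configuration written to a regular file looks roughly like the following; the outer "subsystems"/"bdev" envelope is the standard SPDK JSON-config layout and is assumed here, since the trace only shows the controller entry itself:

cat > /tmp/bdevperf_nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /tmp/bdevperf_nvme.json \
    -q 64 -o 65536 -w verify -t 10        # same queue depth, I/O size, workload and runtime as above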
00:09:16.406 04:12:28 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:16.406 04:12:28 -- common/autotest_common.sh@862 -- # return 0 00:09:16.406 04:12:28 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:09:16.406 04:12:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.406 04:12:28 -- common/autotest_common.sh@10 -- # set +x 00:09:16.406 04:12:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.406 04:12:28 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:16.406 04:12:28 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:09:16.406 04:12:28 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:09:16.406 04:12:28 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:09:16.406 04:12:28 -- target/host_management.sh@52 -- # local ret=1 00:09:16.406 04:12:28 -- target/host_management.sh@53 -- # local i 00:09:16.406 04:12:28 -- target/host_management.sh@54 -- # (( i = 10 )) 00:09:16.406 04:12:28 -- target/host_management.sh@54 -- # (( i != 0 )) 00:09:16.406 04:12:28 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:09:16.406 04:12:28 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:09:16.406 04:12:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.406 04:12:28 -- common/autotest_common.sh@10 -- # set +x 00:09:16.406 04:12:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.406 04:12:28 -- target/host_management.sh@55 -- # read_io_count=1744 00:09:16.406 04:12:28 -- target/host_management.sh@58 -- # '[' 1744 -ge 100 ']' 00:09:16.406 04:12:28 -- target/host_management.sh@59 -- # ret=0 00:09:16.406 04:12:28 -- target/host_management.sh@60 -- # break 00:09:16.406 04:12:28 -- target/host_management.sh@64 -- # return 0 00:09:16.406 04:12:28 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:16.406 04:12:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.406 04:12:28 -- common/autotest_common.sh@10 -- # set +x 00:09:16.406 04:12:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.406 04:12:28 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:16.406 04:12:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.406 [2024-12-06 04:12:28.901277] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 ns 04:12:28 -- common/autotest_common.sh@10 -- # set +x 00:09:16.406 id:0 cdw10:00000000 cdw11:00000000 00:09:16.406 [2024-12-06 04:12:28.902874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:16.406 [2024-12-06 04:12:28.902901] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:09:16.406 [2024-12-06 04:12:28.902912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:16.406 [2024-12-06 04:12:28.902922] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:09:16.406 [2024-12-06 04:12:28.902932] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:16.406 [2024-12-06 04:12:28.902949] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:09:16.406 [2024-12-06 04:12:28.902965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:16.406 [2024-12-06 04:12:28.902979] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1616da0 is same with the state(5) to be set 00:09:16.406 [2024-12-06 04:12:28.903611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:114816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:16.406 [2024-12-06 04:12:28.903636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:16.406 [2024-12-06 04:12:28.903685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:114944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:16.406 [2024-12-06 04:12:28.903708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:16.406 [2024-12-06 04:12:28.903722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:115072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:16.406 [2024-12-06 04:12:28.903732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:16.406 [2024-12-06 04:12:28.903743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:115200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:16.406 [2024-12-06 04:12:28.903753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:16.406 [2024-12-06 04:12:28.903765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:115328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:16.406 [2024-12-06 04:12:28.903774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:16.406 [2024-12-06 04:12:28.903786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:115456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:16.406 [2024-12-06 04:12:28.903795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:16.406 [2024-12-06 04:12:28.903806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:115584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:16.406 [2024-12-06 04:12:28.903815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:16.406 [2024-12-06 04:12:28.903827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:115712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:16.406 [2024-12-06 04:12:28.903836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:16.406 [2024-12-06 04:12:28.903847] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:49 nsid:1 lba:115840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:16.406 [2024-12-06 04:12:28.903871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:16.406 [2024-12-06 04:12:28.903883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:115968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:16.406 [2024-12-06 04:12:28.903892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:16.406 [2024-12-06 04:12:28.903904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:116096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:16.406 [2024-12-06 04:12:28.903913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:16.406 [2024-12-06 04:12:28.903925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:116224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:16.406 [2024-12-06 04:12:28.903938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:16.406 [2024-12-06 04:12:28.903956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:109696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:16.406 [2024-12-06 04:12:28.903972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:16.407 [2024-12-06 04:12:28.903984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:116352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:16.407 [2024-12-06 04:12:28.903995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:16.407 [2024-12-06 04:12:28.904006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:110208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:16.407 [2024-12-06 04:12:28.904015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:16.407 [2024-12-06 04:12:28.904026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:116480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:16.407 [2024-12-06 04:12:28.904035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:16.407 [2024-12-06 04:12:28.904046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:110336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:16.407 [2024-12-06 04:12:28.904056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:16.407 [2024-12-06 04:12:28.904067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:116608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:16.407 [2024-12-06 04:12:28.904076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:16.407 [2024-12-06 04:12:28.904087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:50 nsid:1 lba:116736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:16.407 [2024-12-06 04:12:28.904097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:16.407 [2024-12-06 04:12:28.904108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:116864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:16.407 [2024-12-06 04:12:28.904118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:16.407 [2024-12-06 04:12:28.904129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:116992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:16.407 [2024-12-06 04:12:28.904139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:16.407 [2024-12-06 04:12:28.904151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:117120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:16.407 [2024-12-06 04:12:28.904161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:16.407 [2024-12-06 04:12:28.904172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:110720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:16.407 [2024-12-06 04:12:28.904181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:16.407 [2024-12-06 04:12:28.904193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:117248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:16.407 [2024-12-06 04:12:28.904202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:16.407 [2024-12-06 04:12:28.904214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:117376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:16.407 [2024-12-06 04:12:28.904228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:16.407 [2024-12-06 04:12:28.904240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:117504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:16.407 [2024-12-06 04:12:28.904249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:16.407 [2024-12-06 04:12:28.904261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:117632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:16.407 [2024-12-06 04:12:28.904270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:16.407 [2024-12-06 04:12:28.904282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:110848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:16.407 [2024-12-06 04:12:28.904292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:16.407 [2024-12-06 04:12:28.904303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:48 nsid:1 lba:117760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:16.407 [2024-12-06 04:12:28.904312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:16.407 [2024-12-06 04:12:28.904324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:117888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:16.407 [2024-12-06 04:12:28.904334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:16.407 [2024-12-06 04:12:28.904345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:118016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:16.407 [2024-12-06 04:12:28.904355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:16.407 [2024-12-06 04:12:28.904366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:111232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:16.407 [2024-12-06 04:12:28.904376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:16.407 [2024-12-06 04:12:28.904633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:118144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:16.407 [2024-12-06 04:12:28.904660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:16.407 [2024-12-06 04:12:28.904673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:118272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:16.407 [2024-12-06 04:12:28.904694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:16.407 [2024-12-06 04:12:28.904706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:118400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:16.407 [2024-12-06 04:12:28.904716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:16.407 [2024-12-06 04:12:28.904728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:118528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:16.407 [2024-12-06 04:12:28.904737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:16.407 [2024-12-06 04:12:28.904748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:111360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:16.407 [2024-12-06 04:12:28.904757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:16.407 [2024-12-06 04:12:28.904770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:118656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:16.407 [2024-12-06 04:12:28.904780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:16.407 [2024-12-06 04:12:28.904791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 
lba:118784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:16.407 [2024-12-06 04:12:28.904800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:16.407 [2024-12-06 04:12:28.904811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:118912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:16.407 [2024-12-06 04:12:28.904820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:16.407 [2024-12-06 04:12:28.904832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:119040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:16.407 [2024-12-06 04:12:28.904846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:16.407 [2024-12-06 04:12:28.904858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:119168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:16.407 [2024-12-06 04:12:28.904867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:16.407 [2024-12-06 04:12:28.904878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:111488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:16.407 [2024-12-06 04:12:28.904887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:16.407 [2024-12-06 04:12:28.904898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:119296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:16.407 [2024-12-06 04:12:28.904907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:16.407 [2024-12-06 04:12:28.904918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:119424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:16.407 [2024-12-06 04:12:28.904928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:16.407 [2024-12-06 04:12:28.904945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:119552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:16.407 [2024-12-06 04:12:28.904961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:16.407 [2024-12-06 04:12:28.904977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:119680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:16.407 [2024-12-06 04:12:28.904987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:16.407 [2024-12-06 04:12:28.904998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:119808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:16.407 [2024-12-06 04:12:28.905007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:16.407 [2024-12-06 04:12:28.905018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:119936 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:16.407 [2024-12-06 04:12:28.905028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:16.407 [2024-12-06 04:12:28.905039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:120064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:16.407 [2024-12-06 04:12:28.905048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:16.407 [2024-12-06 04:12:28.905059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:112128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:16.407 [2024-12-06 04:12:28.905069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:16.407 [2024-12-06 04:12:28.905079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:120192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:16.408 [2024-12-06 04:12:28.905089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:16.408 [2024-12-06 04:12:28.905100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:120320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:16.408 [2024-12-06 04:12:28.905109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:16.408 [2024-12-06 04:12:28.905121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:120448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:16.408 [2024-12-06 04:12:28.905130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:16.408 [2024-12-06 04:12:28.905141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:112256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:16.408 [2024-12-06 04:12:28.905150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:16.408 [2024-12-06 04:12:28.905162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:112384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:16.408 [2024-12-06 04:12:28.905171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:16.408 [2024-12-06 04:12:28.905182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:112768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:16.408 [2024-12-06 04:12:28.905208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:16.408 [2024-12-06 04:12:28.905220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:113024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:16.408 [2024-12-06 04:12:28.905229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:16.408 [2024-12-06 04:12:28.905240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:113408 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:16.408 [2024-12-06 04:12:28.905250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:16.408 [2024-12-06 04:12:28.905261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:113792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:16.408 [2024-12-06 04:12:28.905270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:16.408 [2024-12-06 04:12:28.905281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:114176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:16.408 [2024-12-06 04:12:28.905291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:16.408 [2024-12-06 04:12:28.905302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:114432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:16.408 [2024-12-06 04:12:28.905311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:16.408 [2024-12-06 04:12:28.905322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:114688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:16.408 [2024-12-06 04:12:28.905331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:16.408 [2024-12-06 04:12:28.905342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:120576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:16.408 [2024-12-06 04:12:28.905351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:16.408 [2024-12-06 04:12:28.905361] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1615460 is same with the state(5) to be set 00:09:16.408 [2024-12-06 04:12:28.905624] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1615460 was disconnected and freed. reset controller. 
00:09:16.408 [2024-12-06 04:12:28.906982] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:09:16.408 task offset: 114816 on job bdev=Nvme0n1 fails 00:09:16.408 00:09:16.408 Latency(us) 00:09:16.408 [2024-12-06T04:12:28.973Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:16.408 [2024-12-06T04:12:28.973Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:09:16.408 [2024-12-06T04:12:28.973Z] Job: Nvme0n1 ended in about 0.68 seconds with error 00:09:16.408 Verification LBA range: start 0x0 length 0x400 00:09:16.408 Nvme0n1 : 0.68 2809.05 175.57 94.47 0.00 21700.24 2606.55 28120.90 00:09:16.408 [2024-12-06T04:12:28.973Z] =================================================================================================================== 00:09:16.408 [2024-12-06T04:12:28.973Z] Total : 2809.05 175.57 94.47 0.00 21700.24 2606.55 28120.90 00:09:16.408 [2024-12-06 04:12:28.909165] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:16.408 [2024-12-06 04:12:28.909291] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1616da0 (9): Bad file descriptor 00:09:16.408 04:12:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.408 04:12:28 -- target/host_management.sh@87 -- # sleep 1 00:09:16.408 [2024-12-06 04:12:28.917124] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:09:17.784 04:12:29 -- target/host_management.sh@91 -- # kill -9 72363 00:09:17.784 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (72363) - No such process 00:09:17.784 04:12:29 -- target/host_management.sh@91 -- # true 00:09:17.784 04:12:29 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:09:17.784 04:12:29 -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:09:17.784 04:12:29 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:09:17.784 04:12:29 -- nvmf/common.sh@520 -- # config=() 00:09:17.784 04:12:29 -- nvmf/common.sh@520 -- # local subsystem config 00:09:17.784 04:12:29 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:09:17.784 04:12:29 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:09:17.784 { 00:09:17.784 "params": { 00:09:17.784 "name": "Nvme$subsystem", 00:09:17.784 "trtype": "$TEST_TRANSPORT", 00:09:17.784 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:17.784 "adrfam": "ipv4", 00:09:17.784 "trsvcid": "$NVMF_PORT", 00:09:17.784 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:17.784 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:17.784 "hdgst": ${hdgst:-false}, 00:09:17.784 "ddgst": ${ddgst:-false} 00:09:17.784 }, 00:09:17.784 "method": "bdev_nvme_attach_controller" 00:09:17.784 } 00:09:17.784 EOF 00:09:17.784 )") 00:09:17.784 04:12:29 -- nvmf/common.sh@542 -- # cat 00:09:17.784 04:12:29 -- nvmf/common.sh@544 -- # jq . 
00:09:17.784 04:12:29 -- nvmf/common.sh@545 -- # IFS=, 00:09:17.784 04:12:29 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:09:17.784 "params": { 00:09:17.784 "name": "Nvme0", 00:09:17.784 "trtype": "tcp", 00:09:17.784 "traddr": "10.0.0.2", 00:09:17.784 "adrfam": "ipv4", 00:09:17.784 "trsvcid": "4420", 00:09:17.784 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:17.784 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:17.784 "hdgst": false, 00:09:17.784 "ddgst": false 00:09:17.784 }, 00:09:17.784 "method": "bdev_nvme_attach_controller" 00:09:17.784 }' 00:09:17.784 [2024-12-06 04:12:29.972208] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:17.784 [2024-12-06 04:12:29.972312] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72401 ] 00:09:17.784 [2024-12-06 04:12:30.112945] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:17.784 [2024-12-06 04:12:30.192353] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:18.043 Running I/O for 1 seconds... 00:09:19.042 00:09:19.042 Latency(us) 00:09:19.042 [2024-12-06T04:12:31.607Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:19.042 [2024-12-06T04:12:31.607Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:09:19.042 Verification LBA range: start 0x0 length 0x400 00:09:19.042 Nvme0n1 : 1.02 2816.27 176.02 0.00 0.00 22370.51 1124.54 28240.06 00:09:19.042 [2024-12-06T04:12:31.607Z] =================================================================================================================== 00:09:19.042 [2024-12-06T04:12:31.607Z] Total : 2816.27 176.02 0.00 0.00 22370.51 1124.54 28240.06 00:09:19.300 04:12:31 -- target/host_management.sh@101 -- # stoptarget 00:09:19.300 04:12:31 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:09:19.300 04:12:31 -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:09:19.300 04:12:31 -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:09:19.300 04:12:31 -- target/host_management.sh@40 -- # nvmftestfini 00:09:19.300 04:12:31 -- nvmf/common.sh@476 -- # nvmfcleanup 00:09:19.300 04:12:31 -- nvmf/common.sh@116 -- # sync 00:09:19.300 04:12:31 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:09:19.300 04:12:31 -- nvmf/common.sh@119 -- # set +e 00:09:19.300 04:12:31 -- nvmf/common.sh@120 -- # for i in {1..20} 00:09:19.300 04:12:31 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:09:19.300 rmmod nvme_tcp 00:09:19.300 rmmod nvme_fabrics 00:09:19.300 rmmod nvme_keyring 00:09:19.300 04:12:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:09:19.300 04:12:31 -- nvmf/common.sh@123 -- # set -e 00:09:19.300 04:12:31 -- nvmf/common.sh@124 -- # return 0 00:09:19.300 04:12:31 -- nvmf/common.sh@477 -- # '[' -n 72305 ']' 00:09:19.300 04:12:31 -- nvmf/common.sh@478 -- # killprocess 72305 00:09:19.300 04:12:31 -- common/autotest_common.sh@936 -- # '[' -z 72305 ']' 00:09:19.300 04:12:31 -- common/autotest_common.sh@940 -- # kill -0 72305 00:09:19.300 04:12:31 -- common/autotest_common.sh@941 -- # uname 00:09:19.300 04:12:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:19.300 04:12:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72305 00:09:19.300 
killing process with pid 72305 00:09:19.300 04:12:31 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:09:19.300 04:12:31 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:09:19.300 04:12:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72305' 00:09:19.300 04:12:31 -- common/autotest_common.sh@955 -- # kill 72305 00:09:19.300 04:12:31 -- common/autotest_common.sh@960 -- # wait 72305 00:09:19.559 [2024-12-06 04:12:31.997924] app.c: 605:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:09:19.559 04:12:32 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:09:19.559 04:12:32 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:09:19.559 04:12:32 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:09:19.559 04:12:32 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:19.559 04:12:32 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:09:19.559 04:12:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:19.559 04:12:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:19.559 04:12:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:19.559 04:12:32 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:09:19.559 00:09:19.559 real 0m5.560s 00:09:19.559 user 0m23.355s 00:09:19.559 sys 0m1.342s 00:09:19.559 04:12:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:19.559 ************************************ 00:09:19.559 END TEST nvmf_host_management 00:09:19.559 ************************************ 00:09:19.559 04:12:32 -- common/autotest_common.sh@10 -- # set +x 00:09:19.559 04:12:32 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:09:19.559 ************************************ 00:09:19.559 END TEST nvmf_host_management 00:09:19.559 ************************************ 00:09:19.559 00:09:19.559 real 0m6.254s 00:09:19.559 user 0m23.580s 00:09:19.559 sys 0m1.603s 00:09:19.559 04:12:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:19.559 04:12:32 -- common/autotest_common.sh@10 -- # set +x 00:09:19.818 04:12:32 -- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:09:19.818 04:12:32 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:19.818 04:12:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:19.818 04:12:32 -- common/autotest_common.sh@10 -- # set +x 00:09:19.818 ************************************ 00:09:19.818 START TEST nvmf_lvol 00:09:19.818 ************************************ 00:09:19.818 04:12:32 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:09:19.818 * Looking for test storage... 
00:09:19.818 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:19.818 04:12:32 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:19.818 04:12:32 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:19.818 04:12:32 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:19.818 04:12:32 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:19.818 04:12:32 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:19.818 04:12:32 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:19.818 04:12:32 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:19.818 04:12:32 -- scripts/common.sh@335 -- # IFS=.-: 00:09:19.818 04:12:32 -- scripts/common.sh@335 -- # read -ra ver1 00:09:19.818 04:12:32 -- scripts/common.sh@336 -- # IFS=.-: 00:09:19.818 04:12:32 -- scripts/common.sh@336 -- # read -ra ver2 00:09:19.818 04:12:32 -- scripts/common.sh@337 -- # local 'op=<' 00:09:19.818 04:12:32 -- scripts/common.sh@339 -- # ver1_l=2 00:09:19.818 04:12:32 -- scripts/common.sh@340 -- # ver2_l=1 00:09:19.818 04:12:32 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:19.818 04:12:32 -- scripts/common.sh@343 -- # case "$op" in 00:09:19.818 04:12:32 -- scripts/common.sh@344 -- # : 1 00:09:19.818 04:12:32 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:19.818 04:12:32 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:19.818 04:12:32 -- scripts/common.sh@364 -- # decimal 1 00:09:19.818 04:12:32 -- scripts/common.sh@352 -- # local d=1 00:09:19.818 04:12:32 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:19.818 04:12:32 -- scripts/common.sh@354 -- # echo 1 00:09:19.818 04:12:32 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:19.818 04:12:32 -- scripts/common.sh@365 -- # decimal 2 00:09:19.818 04:12:32 -- scripts/common.sh@352 -- # local d=2 00:09:19.818 04:12:32 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:19.818 04:12:32 -- scripts/common.sh@354 -- # echo 2 00:09:19.818 04:12:32 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:19.818 04:12:32 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:19.818 04:12:32 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:19.818 04:12:32 -- scripts/common.sh@367 -- # return 0 00:09:19.818 04:12:32 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:19.818 04:12:32 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:19.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.818 --rc genhtml_branch_coverage=1 00:09:19.818 --rc genhtml_function_coverage=1 00:09:19.818 --rc genhtml_legend=1 00:09:19.818 --rc geninfo_all_blocks=1 00:09:19.818 --rc geninfo_unexecuted_blocks=1 00:09:19.818 00:09:19.818 ' 00:09:19.818 04:12:32 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:19.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.818 --rc genhtml_branch_coverage=1 00:09:19.818 --rc genhtml_function_coverage=1 00:09:19.818 --rc genhtml_legend=1 00:09:19.818 --rc geninfo_all_blocks=1 00:09:19.818 --rc geninfo_unexecuted_blocks=1 00:09:19.818 00:09:19.818 ' 00:09:19.818 04:12:32 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:19.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.818 --rc genhtml_branch_coverage=1 00:09:19.818 --rc genhtml_function_coverage=1 00:09:19.818 --rc genhtml_legend=1 00:09:19.818 --rc geninfo_all_blocks=1 00:09:19.818 --rc geninfo_unexecuted_blocks=1 00:09:19.818 00:09:19.818 ' 00:09:19.818 
04:12:32 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:19.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.818 --rc genhtml_branch_coverage=1 00:09:19.818 --rc genhtml_function_coverage=1 00:09:19.818 --rc genhtml_legend=1 00:09:19.818 --rc geninfo_all_blocks=1 00:09:19.818 --rc geninfo_unexecuted_blocks=1 00:09:19.818 00:09:19.818 ' 00:09:19.818 04:12:32 -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:19.818 04:12:32 -- nvmf/common.sh@7 -- # uname -s 00:09:19.818 04:12:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:19.818 04:12:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:19.818 04:12:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:19.818 04:12:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:19.818 04:12:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:19.818 04:12:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:19.818 04:12:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:19.818 04:12:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:19.818 04:12:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:19.818 04:12:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:20.077 04:12:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cb4d3929-adbe-4142-b5d1-990bbf2d4fca 00:09:20.077 04:12:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=cb4d3929-adbe-4142-b5d1-990bbf2d4fca 00:09:20.077 04:12:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:20.077 04:12:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:20.077 04:12:32 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:20.077 04:12:32 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:20.077 04:12:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:20.077 04:12:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:20.077 04:12:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:20.077 04:12:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.077 04:12:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.077 04:12:32 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.077 04:12:32 -- paths/export.sh@5 -- # export PATH 00:09:20.077 04:12:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.077 04:12:32 -- nvmf/common.sh@46 -- # : 0 00:09:20.077 04:12:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:20.077 04:12:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:20.077 04:12:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:20.077 04:12:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:20.077 04:12:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:20.077 04:12:32 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:09:20.077 04:12:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:20.077 04:12:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:20.077 04:12:32 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:20.077 04:12:32 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:20.077 04:12:32 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:09:20.077 04:12:32 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:09:20.077 04:12:32 -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:20.077 04:12:32 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:09:20.077 04:12:32 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:09:20.077 04:12:32 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:20.077 04:12:32 -- nvmf/common.sh@436 -- # prepare_net_devs 00:09:20.077 04:12:32 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:09:20.077 04:12:32 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:09:20.077 04:12:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:20.077 04:12:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:20.077 04:12:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:20.077 04:12:32 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:09:20.077 04:12:32 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:09:20.077 04:12:32 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:09:20.077 04:12:32 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:09:20.077 04:12:32 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:09:20.077 04:12:32 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:09:20.077 04:12:32 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:20.077 04:12:32 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:20.077 04:12:32 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:20.077 04:12:32 -- 
nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:09:20.077 04:12:32 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:20.077 04:12:32 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:20.077 04:12:32 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:20.077 04:12:32 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:20.077 04:12:32 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:20.077 04:12:32 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:20.077 04:12:32 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:20.077 04:12:32 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:20.077 04:12:32 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:09:20.077 04:12:32 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:09:20.077 Cannot find device "nvmf_tgt_br" 00:09:20.077 04:12:32 -- nvmf/common.sh@154 -- # true 00:09:20.077 04:12:32 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:09:20.077 Cannot find device "nvmf_tgt_br2" 00:09:20.077 04:12:32 -- nvmf/common.sh@155 -- # true 00:09:20.077 04:12:32 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:09:20.077 04:12:32 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:09:20.077 Cannot find device "nvmf_tgt_br" 00:09:20.077 04:12:32 -- nvmf/common.sh@157 -- # true 00:09:20.077 04:12:32 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:09:20.077 Cannot find device "nvmf_tgt_br2" 00:09:20.077 04:12:32 -- nvmf/common.sh@158 -- # true 00:09:20.077 04:12:32 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:09:20.077 04:12:32 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:09:20.077 04:12:32 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:20.077 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:20.077 04:12:32 -- nvmf/common.sh@161 -- # true 00:09:20.077 04:12:32 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:20.077 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:20.077 04:12:32 -- nvmf/common.sh@162 -- # true 00:09:20.077 04:12:32 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:09:20.077 04:12:32 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:20.077 04:12:32 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:20.077 04:12:32 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:20.078 04:12:32 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:20.078 04:12:32 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:20.078 04:12:32 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:20.078 04:12:32 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:20.078 04:12:32 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:20.078 04:12:32 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:09:20.078 04:12:32 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:09:20.078 04:12:32 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:09:20.336 04:12:32 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:09:20.336 04:12:32 -- 
nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:20.336 04:12:32 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:20.336 04:12:32 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:20.336 04:12:32 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:09:20.336 04:12:32 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:09:20.336 04:12:32 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:09:20.336 04:12:32 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:20.336 04:12:32 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:20.336 04:12:32 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:20.336 04:12:32 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:20.336 04:12:32 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:09:20.336 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:20.336 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:09:20.336 00:09:20.336 --- 10.0.0.2 ping statistics --- 00:09:20.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:20.336 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:09:20.336 04:12:32 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:09:20.336 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:20.336 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:09:20.336 00:09:20.336 --- 10.0.0.3 ping statistics --- 00:09:20.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:20.336 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:09:20.336 04:12:32 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:20.336 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:20.336 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:09:20.336 00:09:20.336 --- 10.0.0.1 ping statistics --- 00:09:20.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:20.336 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:09:20.336 04:12:32 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:20.336 04:12:32 -- nvmf/common.sh@421 -- # return 0 00:09:20.336 04:12:32 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:09:20.336 04:12:32 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:20.336 04:12:32 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:09:20.336 04:12:32 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:09:20.336 04:12:32 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:20.336 04:12:32 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:09:20.336 04:12:32 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:09:20.336 04:12:32 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:09:20.336 04:12:32 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:20.336 04:12:32 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:20.336 04:12:32 -- common/autotest_common.sh@10 -- # set +x 00:09:20.336 04:12:32 -- nvmf/common.sh@469 -- # nvmfpid=72637 00:09:20.336 04:12:32 -- nvmf/common.sh@470 -- # waitforlisten 72637 00:09:20.336 04:12:32 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:09:20.336 04:12:32 -- common/autotest_common.sh@829 -- # '[' -z 72637 ']' 00:09:20.336 04:12:32 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:20.336 04:12:32 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:20.336 04:12:32 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:20.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:20.336 04:12:32 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:20.336 04:12:32 -- common/autotest_common.sh@10 -- # set +x 00:09:20.336 [2024-12-06 04:12:32.822460] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:20.336 [2024-12-06 04:12:32.822576] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:20.595 [2024-12-06 04:12:32.967844] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:20.595 [2024-12-06 04:12:33.058505] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:20.595 [2024-12-06 04:12:33.058653] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:20.595 [2024-12-06 04:12:33.058665] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:20.595 [2024-12-06 04:12:33.058674] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:20.595 [2024-12-06 04:12:33.058811] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:20.595 [2024-12-06 04:12:33.059300] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:20.595 [2024-12-06 04:12:33.059309] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:21.529 04:12:33 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:21.529 04:12:33 -- common/autotest_common.sh@862 -- # return 0 00:09:21.529 04:12:33 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:21.529 04:12:33 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:21.529 04:12:33 -- common/autotest_common.sh@10 -- # set +x 00:09:21.529 04:12:33 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:21.529 04:12:33 -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:21.529 [2024-12-06 04:12:34.079378] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:21.788 04:12:34 -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:22.046 04:12:34 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:09:22.046 04:12:34 -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:22.304 04:12:34 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:09:22.304 04:12:34 -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:09:22.562 04:12:34 -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:09:22.829 04:12:35 -- target/nvmf_lvol.sh@29 -- # lvs=82e0bb52-1bcb-4c5f-b3f5-3feaa3268f56 00:09:22.829 04:12:35 -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 82e0bb52-1bcb-4c5f-b3f5-3feaa3268f56 lvol 20 00:09:23.088 04:12:35 -- target/nvmf_lvol.sh@32 -- # lvol=004cd3b6-bf89-454a-8ed8-e54070fcb41c 00:09:23.088 04:12:35 -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:23.346 04:12:35 -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 004cd3b6-bf89-454a-8ed8-e54070fcb41c 00:09:23.605 04:12:35 -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:23.863 [2024-12-06 04:12:36.236259] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:23.863 04:12:36 -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:24.122 04:12:36 -- target/nvmf_lvol.sh@42 -- # perf_pid=72712 00:09:24.122 04:12:36 -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:09:24.122 04:12:36 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:09:25.054 04:12:37 -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 004cd3b6-bf89-454a-8ed8-e54070fcb41c MY_SNAPSHOT 00:09:25.312 04:12:37 -- target/nvmf_lvol.sh@47 -- # snapshot=bfe9580d-2ecc-4784-a0b8-fea1270ab9fc 00:09:25.312 04:12:37 -- target/nvmf_lvol.sh@48 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 004cd3b6-bf89-454a-8ed8-e54070fcb41c 30 00:09:25.570 04:12:38 -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone bfe9580d-2ecc-4784-a0b8-fea1270ab9fc MY_CLONE 00:09:25.827 04:12:38 -- target/nvmf_lvol.sh@49 -- # clone=be50dd18-2fd8-4718-9dc7-38d34db5e504 00:09:25.827 04:12:38 -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate be50dd18-2fd8-4718-9dc7-38d34db5e504 00:09:26.390 04:12:38 -- target/nvmf_lvol.sh@53 -- # wait 72712 00:09:34.499 Initializing NVMe Controllers 00:09:34.499 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:09:34.499 Controller IO queue size 128, less than required. 00:09:34.499 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:34.499 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:09:34.499 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:09:34.499 Initialization complete. Launching workers. 00:09:34.499 ======================================================== 00:09:34.499 Latency(us) 00:09:34.499 Device Information : IOPS MiB/s Average min max 00:09:34.499 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 9852.30 38.49 12995.04 2041.54 64550.51 00:09:34.499 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 9967.60 38.94 12846.76 3049.39 63473.26 00:09:34.499 ======================================================== 00:09:34.499 Total : 19819.90 77.42 12920.47 2041.54 64550.51 00:09:34.499 00:09:34.499 04:12:46 -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:34.759 04:12:47 -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 004cd3b6-bf89-454a-8ed8-e54070fcb41c 00:09:35.017 04:12:47 -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 82e0bb52-1bcb-4c5f-b3f5-3feaa3268f56 00:09:35.276 04:12:47 -- target/nvmf_lvol.sh@60 -- # rm -f 00:09:35.276 04:12:47 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:09:35.276 04:12:47 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:09:35.276 04:12:47 -- nvmf/common.sh@476 -- # nvmfcleanup 00:09:35.276 04:12:47 -- nvmf/common.sh@116 -- # sync 00:09:35.276 04:12:47 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:09:35.276 04:12:47 -- nvmf/common.sh@119 -- # set +e 00:09:35.276 04:12:47 -- nvmf/common.sh@120 -- # for i in {1..20} 00:09:35.276 04:12:47 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:09:35.276 rmmod nvme_tcp 00:09:35.276 rmmod nvme_fabrics 00:09:35.276 rmmod nvme_keyring 00:09:35.276 04:12:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:09:35.276 04:12:47 -- nvmf/common.sh@123 -- # set -e 00:09:35.276 04:12:47 -- nvmf/common.sh@124 -- # return 0 00:09:35.276 04:12:47 -- nvmf/common.sh@477 -- # '[' -n 72637 ']' 00:09:35.276 04:12:47 -- nvmf/common.sh@478 -- # killprocess 72637 00:09:35.276 04:12:47 -- common/autotest_common.sh@936 -- # '[' -z 72637 ']' 00:09:35.276 04:12:47 -- common/autotest_common.sh@940 -- # kill -0 72637 00:09:35.276 04:12:47 -- common/autotest_common.sh@941 -- # uname 00:09:35.276 04:12:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:35.276 04:12:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 
72637 00:09:35.276 04:12:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:35.276 killing process with pid 72637 00:09:35.276 04:12:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:35.277 04:12:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72637' 00:09:35.277 04:12:47 -- common/autotest_common.sh@955 -- # kill 72637 00:09:35.277 04:12:47 -- common/autotest_common.sh@960 -- # wait 72637 00:09:35.536 04:12:48 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:09:35.536 04:12:48 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:09:35.536 04:12:48 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:09:35.536 04:12:48 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:35.536 04:12:48 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:09:35.536 04:12:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:35.536 04:12:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:35.536 04:12:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:35.536 04:12:48 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:09:35.536 00:09:35.536 real 0m15.892s 00:09:35.536 user 1m5.185s 00:09:35.536 sys 0m4.777s 00:09:35.536 04:12:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:35.536 04:12:48 -- common/autotest_common.sh@10 -- # set +x 00:09:35.536 ************************************ 00:09:35.536 END TEST nvmf_lvol 00:09:35.536 ************************************ 00:09:35.795 04:12:48 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:35.795 04:12:48 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:35.795 04:12:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:35.795 04:12:48 -- common/autotest_common.sh@10 -- # set +x 00:09:35.795 ************************************ 00:09:35.795 START TEST nvmf_lvs_grow 00:09:35.795 ************************************ 00:09:35.795 04:12:48 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:35.795 * Looking for test storage... 
00:09:35.795 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:35.795 04:12:48 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:35.795 04:12:48 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:35.795 04:12:48 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:35.795 04:12:48 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:35.795 04:12:48 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:35.795 04:12:48 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:35.795 04:12:48 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:35.795 04:12:48 -- scripts/common.sh@335 -- # IFS=.-: 00:09:35.795 04:12:48 -- scripts/common.sh@335 -- # read -ra ver1 00:09:35.795 04:12:48 -- scripts/common.sh@336 -- # IFS=.-: 00:09:35.795 04:12:48 -- scripts/common.sh@336 -- # read -ra ver2 00:09:35.795 04:12:48 -- scripts/common.sh@337 -- # local 'op=<' 00:09:35.795 04:12:48 -- scripts/common.sh@339 -- # ver1_l=2 00:09:35.795 04:12:48 -- scripts/common.sh@340 -- # ver2_l=1 00:09:35.795 04:12:48 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:35.795 04:12:48 -- scripts/common.sh@343 -- # case "$op" in 00:09:35.795 04:12:48 -- scripts/common.sh@344 -- # : 1 00:09:35.795 04:12:48 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:35.795 04:12:48 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:35.795 04:12:48 -- scripts/common.sh@364 -- # decimal 1 00:09:35.795 04:12:48 -- scripts/common.sh@352 -- # local d=1 00:09:35.795 04:12:48 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:35.795 04:12:48 -- scripts/common.sh@354 -- # echo 1 00:09:35.795 04:12:48 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:35.795 04:12:48 -- scripts/common.sh@365 -- # decimal 2 00:09:35.795 04:12:48 -- scripts/common.sh@352 -- # local d=2 00:09:35.795 04:12:48 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:35.795 04:12:48 -- scripts/common.sh@354 -- # echo 2 00:09:35.795 04:12:48 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:35.795 04:12:48 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:35.795 04:12:48 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:35.795 04:12:48 -- scripts/common.sh@367 -- # return 0 00:09:35.795 04:12:48 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:35.795 04:12:48 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:35.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.795 --rc genhtml_branch_coverage=1 00:09:35.795 --rc genhtml_function_coverage=1 00:09:35.795 --rc genhtml_legend=1 00:09:35.795 --rc geninfo_all_blocks=1 00:09:35.795 --rc geninfo_unexecuted_blocks=1 00:09:35.795 00:09:35.795 ' 00:09:35.795 04:12:48 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:35.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.795 --rc genhtml_branch_coverage=1 00:09:35.795 --rc genhtml_function_coverage=1 00:09:35.795 --rc genhtml_legend=1 00:09:35.795 --rc geninfo_all_blocks=1 00:09:35.795 --rc geninfo_unexecuted_blocks=1 00:09:35.795 00:09:35.795 ' 00:09:35.795 04:12:48 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:35.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.795 --rc genhtml_branch_coverage=1 00:09:35.796 --rc genhtml_function_coverage=1 00:09:35.796 --rc genhtml_legend=1 00:09:35.796 --rc geninfo_all_blocks=1 00:09:35.796 --rc geninfo_unexecuted_blocks=1 00:09:35.796 00:09:35.796 ' 00:09:35.796 
04:12:48 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:35.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.796 --rc genhtml_branch_coverage=1 00:09:35.796 --rc genhtml_function_coverage=1 00:09:35.796 --rc genhtml_legend=1 00:09:35.796 --rc geninfo_all_blocks=1 00:09:35.796 --rc geninfo_unexecuted_blocks=1 00:09:35.796 00:09:35.796 ' 00:09:35.796 04:12:48 -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:35.796 04:12:48 -- nvmf/common.sh@7 -- # uname -s 00:09:35.796 04:12:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:35.796 04:12:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:35.796 04:12:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:35.796 04:12:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:35.796 04:12:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:35.796 04:12:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:35.796 04:12:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:35.796 04:12:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:35.796 04:12:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:35.796 04:12:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:35.796 04:12:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cb4d3929-adbe-4142-b5d1-990bbf2d4fca 00:09:35.796 04:12:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=cb4d3929-adbe-4142-b5d1-990bbf2d4fca 00:09:35.796 04:12:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:35.796 04:12:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:35.796 04:12:48 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:35.796 04:12:48 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:35.796 04:12:48 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:35.796 04:12:48 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:35.796 04:12:48 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:35.796 04:12:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.796 04:12:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.796 04:12:48 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.796 04:12:48 -- paths/export.sh@5 -- # export PATH 00:09:35.796 04:12:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.796 04:12:48 -- nvmf/common.sh@46 -- # : 0 00:09:35.796 04:12:48 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:35.796 04:12:48 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:35.796 04:12:48 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:35.796 04:12:48 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:35.796 04:12:48 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:35.796 04:12:48 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:09:35.796 04:12:48 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:35.796 04:12:48 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:35.796 04:12:48 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:35.796 04:12:48 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:35.796 04:12:48 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:09:35.796 04:12:48 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:09:35.796 04:12:48 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:35.796 04:12:48 -- nvmf/common.sh@436 -- # prepare_net_devs 00:09:35.796 04:12:48 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:09:35.796 04:12:48 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:09:35.796 04:12:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:35.796 04:12:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:35.796 04:12:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:35.796 04:12:48 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:09:35.796 04:12:48 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:09:35.796 04:12:48 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:09:35.796 04:12:48 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:09:35.796 04:12:48 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:09:35.796 04:12:48 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:09:35.796 04:12:48 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:35.796 04:12:48 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:35.796 04:12:48 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:35.796 04:12:48 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:09:35.796 04:12:48 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:35.796 04:12:48 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:35.796 04:12:48 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:35.796 04:12:48 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:35.796 04:12:48 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:35.796 04:12:48 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:35.796 04:12:48 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:35.796 04:12:48 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:35.796 04:12:48 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:09:35.796 04:12:48 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:09:35.796 Cannot find device "nvmf_tgt_br" 00:09:35.796 04:12:48 -- nvmf/common.sh@154 -- # true 00:09:35.796 04:12:48 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:09:35.796 Cannot find device "nvmf_tgt_br2" 00:09:35.796 04:12:48 -- nvmf/common.sh@155 -- # true 00:09:35.796 04:12:48 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:09:36.056 04:12:48 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:09:36.056 Cannot find device "nvmf_tgt_br" 00:09:36.056 04:12:48 -- nvmf/common.sh@157 -- # true 00:09:36.056 04:12:48 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:09:36.056 Cannot find device "nvmf_tgt_br2" 00:09:36.056 04:12:48 -- nvmf/common.sh@158 -- # true 00:09:36.056 04:12:48 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:09:36.056 04:12:48 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:09:36.056 04:12:48 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:36.056 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:36.056 04:12:48 -- nvmf/common.sh@161 -- # true 00:09:36.056 04:12:48 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:36.056 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:36.056 04:12:48 -- nvmf/common.sh@162 -- # true 00:09:36.056 04:12:48 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:09:36.056 04:12:48 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:36.056 04:12:48 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:36.056 04:12:48 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:36.056 04:12:48 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:36.056 04:12:48 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:36.056 04:12:48 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:36.056 04:12:48 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:36.056 04:12:48 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:36.056 04:12:48 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:09:36.056 04:12:48 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:09:36.056 04:12:48 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:09:36.056 04:12:48 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:09:36.056 04:12:48 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:36.056 04:12:48 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
00:09:36.056 04:12:48 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:36.056 04:12:48 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:09:36.056 04:12:48 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:09:36.056 04:12:48 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:09:36.056 04:12:48 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:36.056 04:12:48 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:36.056 04:12:48 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:36.056 04:12:48 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:36.056 04:12:48 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:09:36.056 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:36.056 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:09:36.056 00:09:36.056 --- 10.0.0.2 ping statistics --- 00:09:36.056 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:36.056 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:09:36.056 04:12:48 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:09:36.056 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:36.056 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.085 ms 00:09:36.056 00:09:36.056 --- 10.0.0.3 ping statistics --- 00:09:36.056 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:36.056 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:09:36.056 04:12:48 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:36.316 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:36.316 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:09:36.316 00:09:36.316 --- 10.0.0.1 ping statistics --- 00:09:36.316 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:36.316 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:09:36.316 04:12:48 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:36.316 04:12:48 -- nvmf/common.sh@421 -- # return 0 00:09:36.316 04:12:48 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:09:36.316 04:12:48 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:36.316 04:12:48 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:09:36.316 04:12:48 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:09:36.316 04:12:48 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:36.316 04:12:48 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:09:36.316 04:12:48 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:09:36.316 04:12:48 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:09:36.316 04:12:48 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:36.316 04:12:48 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:36.316 04:12:48 -- common/autotest_common.sh@10 -- # set +x 00:09:36.316 04:12:48 -- nvmf/common.sh@469 -- # nvmfpid=73041 00:09:36.316 04:12:48 -- nvmf/common.sh@470 -- # waitforlisten 73041 00:09:36.316 04:12:48 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:36.316 04:12:48 -- common/autotest_common.sh@829 -- # '[' -z 73041 ']' 00:09:36.316 04:12:48 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:36.316 04:12:48 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:36.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:36.316 04:12:48 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:36.316 04:12:48 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:36.316 04:12:48 -- common/autotest_common.sh@10 -- # set +x 00:09:36.316 [2024-12-06 04:12:48.702805] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:36.316 [2024-12-06 04:12:48.702894] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:36.316 [2024-12-06 04:12:48.844346] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:36.575 [2024-12-06 04:12:48.935531] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:36.575 [2024-12-06 04:12:48.935686] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:36.575 [2024-12-06 04:12:48.935700] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:36.575 [2024-12-06 04:12:48.935709] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:36.575 [2024-12-06 04:12:48.935738] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:37.513 04:12:49 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:37.513 04:12:49 -- common/autotest_common.sh@862 -- # return 0 00:09:37.513 04:12:49 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:37.513 04:12:49 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:37.513 04:12:49 -- common/autotest_common.sh@10 -- # set +x 00:09:37.513 04:12:49 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:37.513 04:12:49 -- target/nvmf_lvs_grow.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:37.513 [2024-12-06 04:12:49.995542] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:37.513 04:12:50 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:09:37.513 04:12:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:37.513 04:12:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:37.513 04:12:50 -- common/autotest_common.sh@10 -- # set +x 00:09:37.514 ************************************ 00:09:37.514 START TEST lvs_grow_clean 00:09:37.514 ************************************ 00:09:37.514 04:12:50 -- common/autotest_common.sh@1114 -- # lvs_grow 00:09:37.514 04:12:50 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:37.514 04:12:50 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:37.514 04:12:50 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:37.514 04:12:50 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:37.514 04:12:50 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:37.514 04:12:50 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:37.514 04:12:50 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:37.514 04:12:50 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:37.514 04:12:50 -- target/nvmf_lvs_grow.sh@25 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:38.081 04:12:50 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:38.081 04:12:50 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:38.081 04:12:50 -- target/nvmf_lvs_grow.sh@28 -- # lvs=fc696c50-6ed3-4979-bf52-d92bfd421751 00:09:38.082 04:12:50 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fc696c50-6ed3-4979-bf52-d92bfd421751 00:09:38.082 04:12:50 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:38.341 04:12:50 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:38.341 04:12:50 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:38.341 04:12:50 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u fc696c50-6ed3-4979-bf52-d92bfd421751 lvol 150 00:09:38.628 04:12:51 -- target/nvmf_lvs_grow.sh@33 -- # lvol=efda6ae1-d27f-4ee5-b31a-95f3f03f4791 00:09:38.628 04:12:51 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:38.628 04:12:51 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:38.887 [2024-12-06 04:12:51.352330] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:38.887 [2024-12-06 04:12:51.352467] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:38.887 true 00:09:38.887 04:12:51 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fc696c50-6ed3-4979-bf52-d92bfd421751 00:09:38.887 04:12:51 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:39.146 04:12:51 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:39.146 04:12:51 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:39.405 04:12:51 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 efda6ae1-d27f-4ee5-b31a-95f3f03f4791 00:09:39.664 04:12:52 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:39.923 [2024-12-06 04:12:52.304958] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:39.923 04:12:52 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:40.184 04:12:52 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:40.184 04:12:52 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=73129 00:09:40.184 04:12:52 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:40.184 04:12:52 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 73129 /var/tmp/bdevperf.sock 00:09:40.184 04:12:52 -- common/autotest_common.sh@829 -- # '[' -z 73129 ']' 00:09:40.184 
04:12:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:40.184 04:12:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:40.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:40.184 04:12:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:40.184 04:12:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:40.184 04:12:52 -- common/autotest_common.sh@10 -- # set +x 00:09:40.184 [2024-12-06 04:12:52.612080] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:40.184 [2024-12-06 04:12:52.612166] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73129 ] 00:09:40.184 [2024-12-06 04:12:52.740865] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:40.443 [2024-12-06 04:12:52.819723] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:41.381 04:12:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:41.381 04:12:53 -- common/autotest_common.sh@862 -- # return 0 00:09:41.381 04:12:53 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:41.640 Nvme0n1 00:09:41.640 04:12:53 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:41.640 [ 00:09:41.640 { 00:09:41.640 "name": "Nvme0n1", 00:09:41.640 "aliases": [ 00:09:41.640 "efda6ae1-d27f-4ee5-b31a-95f3f03f4791" 00:09:41.640 ], 00:09:41.640 "product_name": "NVMe disk", 00:09:41.640 "block_size": 4096, 00:09:41.640 "num_blocks": 38912, 00:09:41.640 "uuid": "efda6ae1-d27f-4ee5-b31a-95f3f03f4791", 00:09:41.640 "assigned_rate_limits": { 00:09:41.641 "rw_ios_per_sec": 0, 00:09:41.641 "rw_mbytes_per_sec": 0, 00:09:41.641 "r_mbytes_per_sec": 0, 00:09:41.641 "w_mbytes_per_sec": 0 00:09:41.641 }, 00:09:41.641 "claimed": false, 00:09:41.641 "zoned": false, 00:09:41.641 "supported_io_types": { 00:09:41.641 "read": true, 00:09:41.641 "write": true, 00:09:41.641 "unmap": true, 00:09:41.641 "write_zeroes": true, 00:09:41.641 "flush": true, 00:09:41.641 "reset": true, 00:09:41.641 "compare": true, 00:09:41.641 "compare_and_write": true, 00:09:41.641 "abort": true, 00:09:41.641 "nvme_admin": true, 00:09:41.641 "nvme_io": true 00:09:41.641 }, 00:09:41.641 "driver_specific": { 00:09:41.641 "nvme": [ 00:09:41.641 { 00:09:41.641 "trid": { 00:09:41.641 "trtype": "TCP", 00:09:41.641 "adrfam": "IPv4", 00:09:41.641 "traddr": "10.0.0.2", 00:09:41.641 "trsvcid": "4420", 00:09:41.641 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:41.641 }, 00:09:41.641 "ctrlr_data": { 00:09:41.641 "cntlid": 1, 00:09:41.641 "vendor_id": "0x8086", 00:09:41.641 "model_number": "SPDK bdev Controller", 00:09:41.641 "serial_number": "SPDK0", 00:09:41.641 "firmware_revision": "24.01.1", 00:09:41.641 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:41.641 "oacs": { 00:09:41.641 "security": 0, 00:09:41.641 "format": 0, 00:09:41.641 "firmware": 0, 00:09:41.641 "ns_manage": 0 00:09:41.641 }, 00:09:41.641 "multi_ctrlr": true, 00:09:41.641 "ana_reporting": false 00:09:41.641 }, 00:09:41.641 "vs": { 00:09:41.641 
"nvme_version": "1.3" 00:09:41.641 }, 00:09:41.641 "ns_data": { 00:09:41.641 "id": 1, 00:09:41.641 "can_share": true 00:09:41.641 } 00:09:41.641 } 00:09:41.641 ], 00:09:41.641 "mp_policy": "active_passive" 00:09:41.641 } 00:09:41.641 } 00:09:41.641 ] 00:09:41.900 04:12:54 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=73153 00:09:41.900 04:12:54 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:41.900 04:12:54 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:41.900 Running I/O for 10 seconds... 00:09:42.837 Latency(us) 00:09:42.837 [2024-12-06T04:12:55.402Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:42.837 [2024-12-06T04:12:55.402Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:42.837 Nvme0n1 : 1.00 6985.00 27.29 0.00 0.00 0.00 0.00 0.00 00:09:42.837 [2024-12-06T04:12:55.402Z] =================================================================================================================== 00:09:42.837 [2024-12-06T04:12:55.402Z] Total : 6985.00 27.29 0.00 0.00 0.00 0.00 0.00 00:09:42.837 00:09:43.773 04:12:56 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u fc696c50-6ed3-4979-bf52-d92bfd421751 00:09:44.031 [2024-12-06T04:12:56.596Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:44.031 Nvme0n1 : 2.00 6921.50 27.04 0.00 0.00 0.00 0.00 0.00 00:09:44.031 [2024-12-06T04:12:56.596Z] =================================================================================================================== 00:09:44.031 [2024-12-06T04:12:56.596Z] Total : 6921.50 27.04 0.00 0.00 0.00 0.00 0.00 00:09:44.031 00:09:44.031 true 00:09:44.031 04:12:56 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fc696c50-6ed3-4979-bf52-d92bfd421751 00:09:44.031 04:12:56 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:44.289 04:12:56 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:44.289 04:12:56 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:44.289 04:12:56 -- target/nvmf_lvs_grow.sh@65 -- # wait 73153 00:09:44.857 [2024-12-06T04:12:57.422Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:44.857 Nvme0n1 : 3.00 6815.67 26.62 0.00 0.00 0.00 0.00 0.00 00:09:44.857 [2024-12-06T04:12:57.422Z] =================================================================================================================== 00:09:44.857 [2024-12-06T04:12:57.422Z] Total : 6815.67 26.62 0.00 0.00 0.00 0.00 0.00 00:09:44.857 00:09:45.793 [2024-12-06T04:12:58.358Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:45.793 Nvme0n1 : 4.00 6889.75 26.91 0.00 0.00 0.00 0.00 0.00 00:09:45.793 [2024-12-06T04:12:58.358Z] =================================================================================================================== 00:09:45.793 [2024-12-06T04:12:58.358Z] Total : 6889.75 26.91 0.00 0.00 0.00 0.00 0.00 00:09:45.793 00:09:47.190 [2024-12-06T04:12:59.755Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:47.190 Nvme0n1 : 5.00 6807.20 26.59 0.00 0.00 0.00 0.00 0.00 00:09:47.190 [2024-12-06T04:12:59.755Z] =================================================================================================================== 00:09:47.190 [2024-12-06T04:12:59.755Z] Total : 6807.20 26.59 
0.00 0.00 0.00 0.00 0.00 00:09:47.190 00:09:48.155 [2024-12-06T04:13:00.720Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:48.155 Nvme0n1 : 6.00 6815.67 26.62 0.00 0.00 0.00 0.00 0.00 00:09:48.155 [2024-12-06T04:13:00.720Z] =================================================================================================================== 00:09:48.155 [2024-12-06T04:13:00.720Z] Total : 6815.67 26.62 0.00 0.00 0.00 0.00 0.00 00:09:48.155 00:09:49.088 [2024-12-06T04:13:01.654Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:49.089 Nvme0n1 : 7.00 6821.71 26.65 0.00 0.00 0.00 0.00 0.00 00:09:49.089 [2024-12-06T04:13:01.654Z] =================================================================================================================== 00:09:49.089 [2024-12-06T04:13:01.654Z] Total : 6821.71 26.65 0.00 0.00 0.00 0.00 0.00 00:09:49.089 00:09:50.021 [2024-12-06T04:13:02.586Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:50.021 Nvme0n1 : 8.00 6810.38 26.60 0.00 0.00 0.00 0.00 0.00 00:09:50.021 [2024-12-06T04:13:02.586Z] =================================================================================================================== 00:09:50.021 [2024-12-06T04:13:02.586Z] Total : 6810.38 26.60 0.00 0.00 0.00 0.00 0.00 00:09:50.021 00:09:50.953 [2024-12-06T04:13:03.518Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:50.953 Nvme0n1 : 9.00 6810.33 26.60 0.00 0.00 0.00 0.00 0.00 00:09:50.953 [2024-12-06T04:13:03.518Z] =================================================================================================================== 00:09:50.953 [2024-12-06T04:13:03.518Z] Total : 6810.33 26.60 0.00 0.00 0.00 0.00 0.00 00:09:50.953 00:09:51.889 [2024-12-06T04:13:04.454Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:51.889 Nvme0n1 : 10.00 6802.40 26.57 0.00 0.00 0.00 0.00 0.00 00:09:51.889 [2024-12-06T04:13:04.454Z] =================================================================================================================== 00:09:51.889 [2024-12-06T04:13:04.454Z] Total : 6802.40 26.57 0.00 0.00 0.00 0.00 0.00 00:09:51.889 00:09:51.889 00:09:51.889 Latency(us) 00:09:51.889 [2024-12-06T04:13:04.454Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:51.889 [2024-12-06T04:13:04.454Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:51.889 Nvme0n1 : 10.01 6808.37 26.60 0.00 0.00 18795.02 13464.67 73400.32 00:09:51.889 [2024-12-06T04:13:04.454Z] =================================================================================================================== 00:09:51.889 [2024-12-06T04:13:04.454Z] Total : 6808.37 26.60 0.00 0.00 18795.02 13464.67 73400.32 00:09:51.889 0 00:09:51.889 04:13:04 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 73129 00:09:51.889 04:13:04 -- common/autotest_common.sh@936 -- # '[' -z 73129 ']' 00:09:51.889 04:13:04 -- common/autotest_common.sh@940 -- # kill -0 73129 00:09:51.889 04:13:04 -- common/autotest_common.sh@941 -- # uname 00:09:51.889 04:13:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:51.889 04:13:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73129 00:09:51.889 04:13:04 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:09:51.889 killing process with pid 73129 00:09:51.889 04:13:04 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:09:51.889 04:13:04 
-- common/autotest_common.sh@954 -- # echo 'killing process with pid 73129' 00:09:51.889 Received shutdown signal, test time was about 10.000000 seconds 00:09:51.889 00:09:51.889 Latency(us) 00:09:51.889 [2024-12-06T04:13:04.454Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:51.889 [2024-12-06T04:13:04.454Z] =================================================================================================================== 00:09:51.889 [2024-12-06T04:13:04.454Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:51.889 04:13:04 -- common/autotest_common.sh@955 -- # kill 73129 00:09:51.889 04:13:04 -- common/autotest_common.sh@960 -- # wait 73129 00:09:52.147 04:13:04 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:52.405 04:13:04 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:09:52.405 04:13:04 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fc696c50-6ed3-4979-bf52-d92bfd421751 00:09:52.664 04:13:05 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:09:52.664 04:13:05 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:09:52.664 04:13:05 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:52.923 [2024-12-06 04:13:05.334060] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:52.923 04:13:05 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fc696c50-6ed3-4979-bf52-d92bfd421751 00:09:52.923 04:13:05 -- common/autotest_common.sh@650 -- # local es=0 00:09:52.923 04:13:05 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fc696c50-6ed3-4979-bf52-d92bfd421751 00:09:52.923 04:13:05 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:52.923 04:13:05 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:52.923 04:13:05 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:52.923 04:13:05 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:52.923 04:13:05 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:52.923 04:13:05 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:52.923 04:13:05 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:52.923 04:13:05 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:52.923 04:13:05 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fc696c50-6ed3-4979-bf52-d92bfd421751 00:09:53.180 request: 00:09:53.180 { 00:09:53.180 "uuid": "fc696c50-6ed3-4979-bf52-d92bfd421751", 00:09:53.180 "method": "bdev_lvol_get_lvstores", 00:09:53.180 "req_id": 1 00:09:53.180 } 00:09:53.180 Got JSON-RPC error response 00:09:53.180 response: 00:09:53.180 { 00:09:53.180 "code": -19, 00:09:53.180 "message": "No such device" 00:09:53.180 } 00:09:53.180 04:13:05 -- common/autotest_common.sh@653 -- # es=1 00:09:53.180 04:13:05 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:53.181 04:13:05 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:53.181 04:13:05 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 
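The @83/@84 steps above are a negative check: deleting the backing aio_bdev hot-removes the logical volume store (the vbdev_lvol notice about closing lvstore lvs), so a follow-up bdev_lvol_get_lvstores for the same UUID has to fail. The JSON-RPC error code -19 is -ENODEV ("No such device"), and the NOT helper from autotest_common.sh treats a non-zero exit of the wrapped command as success. A hand-rolled equivalent of the check, using the UUID from this run, would be roughly:

    scripts/rpc.py bdev_aio_delete aio_bdev
    if scripts/rpc.py bdev_lvol_get_lvstores -u fc696c50-6ed3-4979-bf52-d92bfd421751; then
        echo "lvstore is still reachable after its base bdev was deleted" >&2
        exit 1
    fi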
00:09:53.181 04:13:05 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:53.439 aio_bdev 00:09:53.439 04:13:05 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev efda6ae1-d27f-4ee5-b31a-95f3f03f4791 00:09:53.439 04:13:05 -- common/autotest_common.sh@897 -- # local bdev_name=efda6ae1-d27f-4ee5-b31a-95f3f03f4791 00:09:53.439 04:13:05 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:53.439 04:13:05 -- common/autotest_common.sh@899 -- # local i 00:09:53.439 04:13:05 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:53.439 04:13:05 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:53.439 04:13:05 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:53.698 04:13:06 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b efda6ae1-d27f-4ee5-b31a-95f3f03f4791 -t 2000 00:09:53.957 [ 00:09:53.957 { 00:09:53.957 "name": "efda6ae1-d27f-4ee5-b31a-95f3f03f4791", 00:09:53.957 "aliases": [ 00:09:53.957 "lvs/lvol" 00:09:53.957 ], 00:09:53.957 "product_name": "Logical Volume", 00:09:53.957 "block_size": 4096, 00:09:53.957 "num_blocks": 38912, 00:09:53.957 "uuid": "efda6ae1-d27f-4ee5-b31a-95f3f03f4791", 00:09:53.957 "assigned_rate_limits": { 00:09:53.957 "rw_ios_per_sec": 0, 00:09:53.957 "rw_mbytes_per_sec": 0, 00:09:53.957 "r_mbytes_per_sec": 0, 00:09:53.957 "w_mbytes_per_sec": 0 00:09:53.957 }, 00:09:53.957 "claimed": false, 00:09:53.957 "zoned": false, 00:09:53.957 "supported_io_types": { 00:09:53.957 "read": true, 00:09:53.957 "write": true, 00:09:53.957 "unmap": true, 00:09:53.957 "write_zeroes": true, 00:09:53.957 "flush": false, 00:09:53.957 "reset": true, 00:09:53.957 "compare": false, 00:09:53.957 "compare_and_write": false, 00:09:53.957 "abort": false, 00:09:53.957 "nvme_admin": false, 00:09:53.957 "nvme_io": false 00:09:53.957 }, 00:09:53.957 "driver_specific": { 00:09:53.957 "lvol": { 00:09:53.957 "lvol_store_uuid": "fc696c50-6ed3-4979-bf52-d92bfd421751", 00:09:53.957 "base_bdev": "aio_bdev", 00:09:53.957 "thin_provision": false, 00:09:53.957 "snapshot": false, 00:09:53.957 "clone": false, 00:09:53.957 "esnap_clone": false 00:09:53.957 } 00:09:53.957 } 00:09:53.957 } 00:09:53.957 ] 00:09:53.957 04:13:06 -- common/autotest_common.sh@905 -- # return 0 00:09:53.957 04:13:06 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fc696c50-6ed3-4979-bf52-d92bfd421751 00:09:53.957 04:13:06 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:09:54.215 04:13:06 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:09:54.215 04:13:06 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:09:54.215 04:13:06 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fc696c50-6ed3-4979-bf52-d92bfd421751 00:09:54.475 04:13:06 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:09:54.475 04:13:06 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete efda6ae1-d27f-4ee5-b31a-95f3f03f4791 00:09:54.733 04:13:07 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u fc696c50-6ed3-4979-bf52-d92bfd421751 00:09:54.992 04:13:07 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 
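The cluster counts asserted throughout lvs_grow_clean follow directly from the sizes in use: the aio file starts at 200M, the store is created with --cluster-sz 4194304 (4 MiB), and the file is truncated to 400M and rescanned before bdev_lvol_grow_lvstore is issued. The one cluster that never shows up as data is taken by lvstore/blobstore metadata; that overhead is inferred from the reported totals, not stated in the trace:

    200 MiB / 4 MiB = 50 clusters  -> total_data_clusters = 49 (1 cluster of metadata)
    400 MiB / 4 MiB = 100 clusters -> total_data_clusters = 99 after grow_lvstore
    150M lvol       = 38912 blocks x 4096 B = 152 MiB = 38 clusters (rounded up to whole clusters)
    free clusters   = 99 - 38 = 61, the value the (( free_clusters == 61 )) checks assert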
00:09:55.250 04:13:07 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:55.509 ************************************ 00:09:55.509 END TEST lvs_grow_clean 00:09:55.509 ************************************ 00:09:55.509 00:09:55.509 real 0m18.003s 00:09:55.509 user 0m16.906s 00:09:55.509 sys 0m2.698s 00:09:55.509 04:13:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:55.509 04:13:08 -- common/autotest_common.sh@10 -- # set +x 00:09:55.509 04:13:08 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:09:55.509 04:13:08 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:55.509 04:13:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:55.509 04:13:08 -- common/autotest_common.sh@10 -- # set +x 00:09:55.768 ************************************ 00:09:55.768 START TEST lvs_grow_dirty 00:09:55.768 ************************************ 00:09:55.768 04:13:08 -- common/autotest_common.sh@1114 -- # lvs_grow dirty 00:09:55.768 04:13:08 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:55.768 04:13:08 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:55.768 04:13:08 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:55.768 04:13:08 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:55.768 04:13:08 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:55.768 04:13:08 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:55.768 04:13:08 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:55.768 04:13:08 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:55.768 04:13:08 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:56.026 04:13:08 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:56.026 04:13:08 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:56.285 04:13:08 -- target/nvmf_lvs_grow.sh@28 -- # lvs=9419d7ca-d934-44d2-a17f-72b4af96adfa 00:09:56.285 04:13:08 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9419d7ca-d934-44d2-a17f-72b4af96adfa 00:09:56.285 04:13:08 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:56.544 04:13:08 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:56.544 04:13:08 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:56.544 04:13:08 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 9419d7ca-d934-44d2-a17f-72b4af96adfa lvol 150 00:09:56.803 04:13:09 -- target/nvmf_lvs_grow.sh@33 -- # lvol=fe332ef2-5b71-44df-a2c7-943b55357c2c 00:09:56.803 04:13:09 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:56.803 04:13:09 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:57.062 [2024-12-06 04:13:09.413308] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:57.062 [2024-12-06 04:13:09.413403] vbdev_lvol.c: 
165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:57.062 true 00:09:57.062 04:13:09 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9419d7ca-d934-44d2-a17f-72b4af96adfa 00:09:57.062 04:13:09 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:57.321 04:13:09 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:57.321 04:13:09 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:57.579 04:13:09 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 fe332ef2-5b71-44df-a2c7-943b55357c2c 00:09:57.838 04:13:10 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:58.097 04:13:10 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:58.356 04:13:10 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=73393 00:09:58.356 04:13:10 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:58.356 04:13:10 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:58.356 04:13:10 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 73393 /var/tmp/bdevperf.sock 00:09:58.357 04:13:10 -- common/autotest_common.sh@829 -- # '[' -z 73393 ']' 00:09:58.357 04:13:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:58.357 04:13:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:58.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:58.357 04:13:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:58.357 04:13:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:58.357 04:13:10 -- common/autotest_common.sh@10 -- # set +x 00:09:58.357 [2024-12-06 04:13:10.723031] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
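The export sequence for the dirty variant is the same as in the clean one: the 150M lvol is published as a namespace of nqn.2016-06.io.spdk:cnode0, with a data listener and a discovery listener on the namespace-side address. Pulled together from the trace (the TCP transport itself was created once, earlier, with nvmf_create_transport -t tcp -o -u 8192):

    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 fe332ef2-5b71-44df-a2c7-943b55357c2c
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420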
00:09:58.357 [2024-12-06 04:13:10.723149] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73393 ] 00:09:58.357 [2024-12-06 04:13:10.862713] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:58.615 [2024-12-06 04:13:10.941948] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:59.182 04:13:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:59.182 04:13:11 -- common/autotest_common.sh@862 -- # return 0 00:09:59.183 04:13:11 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:59.750 Nvme0n1 00:09:59.750 04:13:12 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:59.750 [ 00:09:59.750 { 00:09:59.750 "name": "Nvme0n1", 00:09:59.750 "aliases": [ 00:09:59.750 "fe332ef2-5b71-44df-a2c7-943b55357c2c" 00:09:59.750 ], 00:09:59.750 "product_name": "NVMe disk", 00:09:59.750 "block_size": 4096, 00:09:59.750 "num_blocks": 38912, 00:09:59.750 "uuid": "fe332ef2-5b71-44df-a2c7-943b55357c2c", 00:09:59.750 "assigned_rate_limits": { 00:09:59.750 "rw_ios_per_sec": 0, 00:09:59.750 "rw_mbytes_per_sec": 0, 00:09:59.750 "r_mbytes_per_sec": 0, 00:09:59.750 "w_mbytes_per_sec": 0 00:09:59.750 }, 00:09:59.750 "claimed": false, 00:09:59.750 "zoned": false, 00:09:59.750 "supported_io_types": { 00:09:59.750 "read": true, 00:09:59.750 "write": true, 00:09:59.750 "unmap": true, 00:09:59.750 "write_zeroes": true, 00:09:59.750 "flush": true, 00:09:59.750 "reset": true, 00:09:59.750 "compare": true, 00:09:59.750 "compare_and_write": true, 00:09:59.750 "abort": true, 00:09:59.750 "nvme_admin": true, 00:09:59.750 "nvme_io": true 00:09:59.750 }, 00:09:59.750 "driver_specific": { 00:09:59.750 "nvme": [ 00:09:59.750 { 00:09:59.750 "trid": { 00:09:59.750 "trtype": "TCP", 00:09:59.750 "adrfam": "IPv4", 00:09:59.750 "traddr": "10.0.0.2", 00:09:59.750 "trsvcid": "4420", 00:09:59.750 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:59.750 }, 00:09:59.750 "ctrlr_data": { 00:09:59.750 "cntlid": 1, 00:09:59.750 "vendor_id": "0x8086", 00:09:59.750 "model_number": "SPDK bdev Controller", 00:09:59.750 "serial_number": "SPDK0", 00:09:59.750 "firmware_revision": "24.01.1", 00:09:59.750 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:59.750 "oacs": { 00:09:59.750 "security": 0, 00:09:59.750 "format": 0, 00:09:59.750 "firmware": 0, 00:09:59.750 "ns_manage": 0 00:09:59.750 }, 00:09:59.750 "multi_ctrlr": true, 00:09:59.750 "ana_reporting": false 00:09:59.750 }, 00:09:59.750 "vs": { 00:09:59.750 "nvme_version": "1.3" 00:09:59.750 }, 00:09:59.750 "ns_data": { 00:09:59.750 "id": 1, 00:09:59.750 "can_share": true 00:09:59.750 } 00:09:59.750 } 00:09:59.750 ], 00:09:59.750 "mp_policy": "active_passive" 00:09:59.750 } 00:09:59.750 } 00:09:59.750 ] 00:09:59.750 04:13:12 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=73422 00:09:59.750 04:13:12 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:59.750 04:13:12 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:00.009 Running I/O for 10 seconds... 
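On the initiator side, bdevperf is a second SPDK application pinned to core 1 (-m 0x2) with its own RPC socket. It is started with -z, so it idles until told to run; the harness attaches the remote namespace over TCP as bdev Nvme0n1 and then kicks off the 10-second random-write job through the perform_tests RPC, which is the "Running I/O for 10 seconds..." line above. Condensed from the trace, with paths relative to the SPDK repo root:

    build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The backing aio file was already truncated to 400M and rescanned before the job started; the only operation performed under live I/O is the bdev_lvol_grow_lvstore call at @60, visible around the 2-second sample in both variants.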
00:10:00.945 Latency(us) 00:10:00.945 [2024-12-06T04:13:13.510Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:00.945 [2024-12-06T04:13:13.510Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:00.945 Nvme0n1 : 1.00 7112.00 27.78 0.00 0.00 0.00 0.00 0.00 00:10:00.945 [2024-12-06T04:13:13.510Z] =================================================================================================================== 00:10:00.945 [2024-12-06T04:13:13.510Z] Total : 7112.00 27.78 0.00 0.00 0.00 0.00 0.00 00:10:00.945 00:10:01.881 04:13:14 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 9419d7ca-d934-44d2-a17f-72b4af96adfa 00:10:01.881 [2024-12-06T04:13:14.446Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:01.881 Nvme0n1 : 2.00 6921.50 27.04 0.00 0.00 0.00 0.00 0.00 00:10:01.881 [2024-12-06T04:13:14.446Z] =================================================================================================================== 00:10:01.881 [2024-12-06T04:13:14.446Z] Total : 6921.50 27.04 0.00 0.00 0.00 0.00 0.00 00:10:01.881 00:10:02.139 true 00:10:02.139 04:13:14 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9419d7ca-d934-44d2-a17f-72b4af96adfa 00:10:02.139 04:13:14 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:10:02.398 04:13:14 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:10:02.398 04:13:14 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:10:02.398 04:13:14 -- target/nvmf_lvs_grow.sh@65 -- # wait 73422 00:10:02.964 [2024-12-06T04:13:15.529Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:02.964 Nvme0n1 : 3.00 7027.33 27.45 0.00 0.00 0.00 0.00 0.00 00:10:02.964 [2024-12-06T04:13:15.529Z] =================================================================================================================== 00:10:02.964 [2024-12-06T04:13:15.529Z] Total : 7027.33 27.45 0.00 0.00 0.00 0.00 0.00 00:10:02.964 00:10:03.896 [2024-12-06T04:13:16.461Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:03.896 Nvme0n1 : 4.00 7080.25 27.66 0.00 0.00 0.00 0.00 0.00 00:10:03.896 [2024-12-06T04:13:16.461Z] =================================================================================================================== 00:10:03.896 [2024-12-06T04:13:16.461Z] Total : 7080.25 27.66 0.00 0.00 0.00 0.00 0.00 00:10:03.896 00:10:05.308 [2024-12-06T04:13:17.873Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:05.308 Nvme0n1 : 5.00 7086.60 27.68 0.00 0.00 0.00 0.00 0.00 00:10:05.308 [2024-12-06T04:13:17.873Z] =================================================================================================================== 00:10:05.308 [2024-12-06T04:13:17.873Z] Total : 7086.60 27.68 0.00 0.00 0.00 0.00 0.00 00:10:05.308 00:10:05.874 [2024-12-06T04:13:18.439Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:05.874 Nvme0n1 : 6.00 7048.50 27.53 0.00 0.00 0.00 0.00 0.00 00:10:05.874 [2024-12-06T04:13:18.439Z] =================================================================================================================== 00:10:05.874 [2024-12-06T04:13:18.439Z] Total : 7048.50 27.53 0.00 0.00 0.00 0.00 0.00 00:10:05.874 00:10:07.250 [2024-12-06T04:13:19.815Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:10:07.250 Nvme0n1 : 7.00 6902.43 26.96 0.00 0.00 0.00 0.00 0.00 00:10:07.250 [2024-12-06T04:13:19.815Z] =================================================================================================================== 00:10:07.250 [2024-12-06T04:13:19.815Z] Total : 6902.43 26.96 0.00 0.00 0.00 0.00 0.00 00:10:07.250 00:10:08.187 [2024-12-06T04:13:20.752Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:08.187 Nvme0n1 : 8.00 6881.00 26.88 0.00 0.00 0.00 0.00 0.00 00:10:08.187 [2024-12-06T04:13:20.752Z] =================================================================================================================== 00:10:08.187 [2024-12-06T04:13:20.752Z] Total : 6881.00 26.88 0.00 0.00 0.00 0.00 0.00 00:10:08.187 00:10:09.123 [2024-12-06T04:13:21.688Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:09.123 Nvme0n1 : 9.00 6878.44 26.87 0.00 0.00 0.00 0.00 0.00 00:10:09.123 [2024-12-06T04:13:21.688Z] =================================================================================================================== 00:10:09.123 [2024-12-06T04:13:21.688Z] Total : 6878.44 26.87 0.00 0.00 0.00 0.00 0.00 00:10:09.123 00:10:10.059 [2024-12-06T04:13:22.624Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:10.059 Nvme0n1 : 10.00 6876.40 26.86 0.00 0.00 0.00 0.00 0.00 00:10:10.059 [2024-12-06T04:13:22.624Z] =================================================================================================================== 00:10:10.059 [2024-12-06T04:13:22.624Z] Total : 6876.40 26.86 0.00 0.00 0.00 0.00 0.00 00:10:10.059 00:10:10.059 00:10:10.059 Latency(us) 00:10:10.059 [2024-12-06T04:13:22.624Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:10.059 [2024-12-06T04:13:22.624Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:10.059 Nvme0n1 : 10.01 6885.62 26.90 0.00 0.00 18583.32 12511.42 166818.91 00:10:10.059 [2024-12-06T04:13:22.624Z] =================================================================================================================== 00:10:10.059 [2024-12-06T04:13:22.624Z] Total : 6885.62 26.90 0.00 0.00 18583.32 12511.42 166818.91 00:10:10.059 0 00:10:10.059 04:13:22 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 73393 00:10:10.059 04:13:22 -- common/autotest_common.sh@936 -- # '[' -z 73393 ']' 00:10:10.059 04:13:22 -- common/autotest_common.sh@940 -- # kill -0 73393 00:10:10.059 04:13:22 -- common/autotest_common.sh@941 -- # uname 00:10:10.059 04:13:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:10.059 04:13:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73393 00:10:10.059 killing process with pid 73393 00:10:10.059 Received shutdown signal, test time was about 10.000000 seconds 00:10:10.059 00:10:10.059 Latency(us) 00:10:10.059 [2024-12-06T04:13:22.624Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:10.059 [2024-12-06T04:13:22.624Z] =================================================================================================================== 00:10:10.059 [2024-12-06T04:13:22.624Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:10.059 04:13:22 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:10:10.059 04:13:22 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:10:10.059 04:13:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73393' 00:10:10.059 04:13:22 -- 
common/autotest_common.sh@955 -- # kill 73393 00:10:10.059 04:13:22 -- common/autotest_common.sh@960 -- # wait 73393 00:10:10.317 04:13:22 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:10.576 04:13:22 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9419d7ca-d934-44d2-a17f-72b4af96adfa 00:10:10.576 04:13:22 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:10:10.835 04:13:23 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:10:10.835 04:13:23 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:10:10.835 04:13:23 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 73041 00:10:10.835 04:13:23 -- target/nvmf_lvs_grow.sh@74 -- # wait 73041 00:10:10.835 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 73041 Killed "${NVMF_APP[@]}" "$@" 00:10:10.835 04:13:23 -- target/nvmf_lvs_grow.sh@74 -- # true 00:10:10.835 04:13:23 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:10:10.835 04:13:23 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:10.835 04:13:23 -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:10.835 04:13:23 -- common/autotest_common.sh@10 -- # set +x 00:10:10.835 04:13:23 -- nvmf/common.sh@469 -- # nvmfpid=73548 00:10:10.835 04:13:23 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:10.835 04:13:23 -- nvmf/common.sh@470 -- # waitforlisten 73548 00:10:10.835 04:13:23 -- common/autotest_common.sh@829 -- # '[' -z 73548 ']' 00:10:10.835 04:13:23 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:10.835 04:13:23 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:10.835 04:13:23 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:10.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:10.835 04:13:23 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:10.835 04:13:23 -- common/autotest_common.sh@10 -- # set +x 00:10:10.835 [2024-12-06 04:13:23.318457] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:10.835 [2024-12-06 04:13:23.318566] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:11.094 [2024-12-06 04:13:23.462734] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:11.094 [2024-12-06 04:13:23.552044] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:11.094 [2024-12-06 04:13:23.552193] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:11.094 [2024-12-06 04:13:23.552206] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:11.094 [2024-12-06 04:13:23.552215] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
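This is where the dirty variant earns its name: the nvmf_tgt that has served both tests (pid 73041) is killed with SIGKILL at @73, so the logical volume store never gets a clean shutdown, and a fresh target (pid 73548) is started in the same namespace. When the backing file is re-attached under the new target, the lvol module re-examines it and the blobstore load path has to run recovery, which is what the "Performing recovery on blobstore" and "Recover: blob" notices just below are. The test then verifies that the grown geometry survived the crash; condensed, with the UUID from this run:

    kill -9 "$nvmfpid"                    # pid 73041, no clean lvstore unload
    # ...start a fresh nvmf_tgt inside nvmf_tgt_ns_spdk, then re-attach the file:
    scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
    scripts/rpc.py bdev_lvol_get_lvstores -u 9419d7ca-d934-44d2-a17f-72b4af96adfa \
        | jq -r '.[0].free_clusters'        # expected: 61
    scripts/rpc.py bdev_lvol_get_lvstores -u 9419d7ca-d934-44d2-a17f-72b4af96adfa \
        | jq -r '.[0].total_data_clusters'  # expected: 99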
00:10:11.094 [2024-12-06 04:13:23.552240] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.028 04:13:24 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:12.028 04:13:24 -- common/autotest_common.sh@862 -- # return 0 00:10:12.028 04:13:24 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:12.028 04:13:24 -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:12.028 04:13:24 -- common/autotest_common.sh@10 -- # set +x 00:10:12.028 04:13:24 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:12.028 04:13:24 -- target/nvmf_lvs_grow.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:12.285 [2024-12-06 04:13:24.620367] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:10:12.285 [2024-12-06 04:13:24.620752] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:10:12.285 [2024-12-06 04:13:24.620929] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:10:12.285 04:13:24 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:10:12.285 04:13:24 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev fe332ef2-5b71-44df-a2c7-943b55357c2c 00:10:12.285 04:13:24 -- common/autotest_common.sh@897 -- # local bdev_name=fe332ef2-5b71-44df-a2c7-943b55357c2c 00:10:12.285 04:13:24 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:12.285 04:13:24 -- common/autotest_common.sh@899 -- # local i 00:10:12.285 04:13:24 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:12.285 04:13:24 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:12.285 04:13:24 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:12.543 04:13:24 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fe332ef2-5b71-44df-a2c7-943b55357c2c -t 2000 00:10:12.801 [ 00:10:12.801 { 00:10:12.801 "name": "fe332ef2-5b71-44df-a2c7-943b55357c2c", 00:10:12.801 "aliases": [ 00:10:12.801 "lvs/lvol" 00:10:12.801 ], 00:10:12.801 "product_name": "Logical Volume", 00:10:12.801 "block_size": 4096, 00:10:12.801 "num_blocks": 38912, 00:10:12.801 "uuid": "fe332ef2-5b71-44df-a2c7-943b55357c2c", 00:10:12.801 "assigned_rate_limits": { 00:10:12.801 "rw_ios_per_sec": 0, 00:10:12.801 "rw_mbytes_per_sec": 0, 00:10:12.801 "r_mbytes_per_sec": 0, 00:10:12.801 "w_mbytes_per_sec": 0 00:10:12.801 }, 00:10:12.801 "claimed": false, 00:10:12.801 "zoned": false, 00:10:12.801 "supported_io_types": { 00:10:12.801 "read": true, 00:10:12.801 "write": true, 00:10:12.801 "unmap": true, 00:10:12.801 "write_zeroes": true, 00:10:12.801 "flush": false, 00:10:12.801 "reset": true, 00:10:12.801 "compare": false, 00:10:12.801 "compare_and_write": false, 00:10:12.801 "abort": false, 00:10:12.801 "nvme_admin": false, 00:10:12.801 "nvme_io": false 00:10:12.801 }, 00:10:12.801 "driver_specific": { 00:10:12.801 "lvol": { 00:10:12.801 "lvol_store_uuid": "9419d7ca-d934-44d2-a17f-72b4af96adfa", 00:10:12.801 "base_bdev": "aio_bdev", 00:10:12.801 "thin_provision": false, 00:10:12.801 "snapshot": false, 00:10:12.801 "clone": false, 00:10:12.801 "esnap_clone": false 00:10:12.801 } 00:10:12.801 } 00:10:12.801 } 00:10:12.801 ] 00:10:12.801 04:13:25 -- common/autotest_common.sh@905 -- # return 0 00:10:12.801 04:13:25 -- target/nvmf_lvs_grow.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
9419d7ca-d934-44d2-a17f-72b4af96adfa 00:10:12.801 04:13:25 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:10:13.060 04:13:25 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:10:13.060 04:13:25 -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9419d7ca-d934-44d2-a17f-72b4af96adfa 00:10:13.060 04:13:25 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:10:13.318 04:13:25 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:10:13.318 04:13:25 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:13.576 [2024-12-06 04:13:26.038452] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:10:13.576 04:13:26 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9419d7ca-d934-44d2-a17f-72b4af96adfa 00:10:13.576 04:13:26 -- common/autotest_common.sh@650 -- # local es=0 00:10:13.576 04:13:26 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9419d7ca-d934-44d2-a17f-72b4af96adfa 00:10:13.576 04:13:26 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:13.576 04:13:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:13.576 04:13:26 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:13.576 04:13:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:13.576 04:13:26 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:13.576 04:13:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:13.576 04:13:26 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:13.576 04:13:26 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:10:13.576 04:13:26 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9419d7ca-d934-44d2-a17f-72b4af96adfa 00:10:13.834 request: 00:10:13.834 { 00:10:13.834 "uuid": "9419d7ca-d934-44d2-a17f-72b4af96adfa", 00:10:13.834 "method": "bdev_lvol_get_lvstores", 00:10:13.834 "req_id": 1 00:10:13.834 } 00:10:13.834 Got JSON-RPC error response 00:10:13.834 response: 00:10:13.834 { 00:10:13.834 "code": -19, 00:10:13.834 "message": "No such device" 00:10:13.834 } 00:10:13.834 04:13:26 -- common/autotest_common.sh@653 -- # es=1 00:10:13.834 04:13:26 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:13.834 04:13:26 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:13.834 04:13:26 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:13.834 04:13:26 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:14.096 aio_bdev 00:10:14.096 04:13:26 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev fe332ef2-5b71-44df-a2c7-943b55357c2c 00:10:14.096 04:13:26 -- common/autotest_common.sh@897 -- # local bdev_name=fe332ef2-5b71-44df-a2c7-943b55357c2c 00:10:14.096 04:13:26 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:14.096 04:13:26 -- common/autotest_common.sh@899 -- # local i 00:10:14.096 04:13:26 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:14.096 04:13:26 -- 
common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:14.096 04:13:26 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:14.362 04:13:26 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fe332ef2-5b71-44df-a2c7-943b55357c2c -t 2000 00:10:14.620 [ 00:10:14.620 { 00:10:14.620 "name": "fe332ef2-5b71-44df-a2c7-943b55357c2c", 00:10:14.620 "aliases": [ 00:10:14.620 "lvs/lvol" 00:10:14.620 ], 00:10:14.620 "product_name": "Logical Volume", 00:10:14.620 "block_size": 4096, 00:10:14.620 "num_blocks": 38912, 00:10:14.620 "uuid": "fe332ef2-5b71-44df-a2c7-943b55357c2c", 00:10:14.620 "assigned_rate_limits": { 00:10:14.620 "rw_ios_per_sec": 0, 00:10:14.620 "rw_mbytes_per_sec": 0, 00:10:14.620 "r_mbytes_per_sec": 0, 00:10:14.620 "w_mbytes_per_sec": 0 00:10:14.620 }, 00:10:14.620 "claimed": false, 00:10:14.620 "zoned": false, 00:10:14.620 "supported_io_types": { 00:10:14.620 "read": true, 00:10:14.620 "write": true, 00:10:14.620 "unmap": true, 00:10:14.620 "write_zeroes": true, 00:10:14.620 "flush": false, 00:10:14.620 "reset": true, 00:10:14.620 "compare": false, 00:10:14.620 "compare_and_write": false, 00:10:14.620 "abort": false, 00:10:14.620 "nvme_admin": false, 00:10:14.620 "nvme_io": false 00:10:14.620 }, 00:10:14.620 "driver_specific": { 00:10:14.620 "lvol": { 00:10:14.620 "lvol_store_uuid": "9419d7ca-d934-44d2-a17f-72b4af96adfa", 00:10:14.620 "base_bdev": "aio_bdev", 00:10:14.620 "thin_provision": false, 00:10:14.620 "snapshot": false, 00:10:14.620 "clone": false, 00:10:14.620 "esnap_clone": false 00:10:14.620 } 00:10:14.620 } 00:10:14.620 } 00:10:14.620 ] 00:10:14.620 04:13:27 -- common/autotest_common.sh@905 -- # return 0 00:10:14.620 04:13:27 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9419d7ca-d934-44d2-a17f-72b4af96adfa 00:10:14.620 04:13:27 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:10:14.878 04:13:27 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:10:14.878 04:13:27 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9419d7ca-d934-44d2-a17f-72b4af96adfa 00:10:14.878 04:13:27 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:10:15.137 04:13:27 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:10:15.137 04:13:27 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete fe332ef2-5b71-44df-a2c7-943b55357c2c 00:10:15.393 04:13:27 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9419d7ca-d934-44d2-a17f-72b4af96adfa 00:10:15.650 04:13:28 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:15.908 04:13:28 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:16.475 00:10:16.475 real 0m20.660s 00:10:16.475 user 0m42.692s 00:10:16.475 sys 0m8.189s 00:10:16.475 04:13:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:16.475 ************************************ 00:10:16.475 END TEST lvs_grow_dirty 00:10:16.475 ************************************ 00:10:16.475 04:13:28 -- common/autotest_common.sh@10 -- # set +x 00:10:16.475 04:13:28 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:10:16.475 04:13:28 -- common/autotest_common.sh@806 -- # type=--id 00:10:16.475 04:13:28 -- 
common/autotest_common.sh@807 -- # id=0 00:10:16.475 04:13:28 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:10:16.475 04:13:28 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:10:16.475 04:13:28 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:10:16.475 04:13:28 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:10:16.475 04:13:28 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:10:16.475 04:13:28 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:10:16.475 nvmf_trace.0 00:10:16.475 04:13:28 -- common/autotest_common.sh@821 -- # return 0 00:10:16.475 04:13:28 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:10:16.475 04:13:28 -- nvmf/common.sh@476 -- # nvmfcleanup 00:10:16.475 04:13:28 -- nvmf/common.sh@116 -- # sync 00:10:16.735 04:13:29 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:10:16.735 04:13:29 -- nvmf/common.sh@119 -- # set +e 00:10:16.735 04:13:29 -- nvmf/common.sh@120 -- # for i in {1..20} 00:10:16.735 04:13:29 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:10:16.735 rmmod nvme_tcp 00:10:16.735 rmmod nvme_fabrics 00:10:16.735 rmmod nvme_keyring 00:10:16.735 04:13:29 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:10:16.735 04:13:29 -- nvmf/common.sh@123 -- # set -e 00:10:16.735 04:13:29 -- nvmf/common.sh@124 -- # return 0 00:10:16.735 04:13:29 -- nvmf/common.sh@477 -- # '[' -n 73548 ']' 00:10:16.735 04:13:29 -- nvmf/common.sh@478 -- # killprocess 73548 00:10:16.735 04:13:29 -- common/autotest_common.sh@936 -- # '[' -z 73548 ']' 00:10:16.735 04:13:29 -- common/autotest_common.sh@940 -- # kill -0 73548 00:10:16.735 04:13:29 -- common/autotest_common.sh@941 -- # uname 00:10:16.735 04:13:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:16.735 04:13:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73548 00:10:16.735 04:13:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:16.735 killing process with pid 73548 00:10:16.735 04:13:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:16.735 04:13:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73548' 00:10:16.735 04:13:29 -- common/autotest_common.sh@955 -- # kill 73548 00:10:16.735 04:13:29 -- common/autotest_common.sh@960 -- # wait 73548 00:10:16.995 04:13:29 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:10:16.995 04:13:29 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:10:16.995 04:13:29 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:10:16.995 04:13:29 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:16.995 04:13:29 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:10:16.995 04:13:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:16.995 04:13:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:16.995 04:13:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:16.995 04:13:29 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:10:16.995 00:10:16.995 real 0m41.359s 00:10:16.995 user 1m6.430s 00:10:16.995 sys 0m11.677s 00:10:16.995 04:13:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:16.995 ************************************ 00:10:16.995 END TEST nvmf_lvs_grow 00:10:16.995 ************************************ 00:10:16.995 04:13:29 -- common/autotest_common.sh@10 -- # set +x 00:10:16.995 04:13:29 -- nvmf/nvmf.sh@49 -- # run_test 
nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:16.995 04:13:29 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:16.995 04:13:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:16.995 04:13:29 -- common/autotest_common.sh@10 -- # set +x 00:10:16.995 ************************************ 00:10:16.995 START TEST nvmf_bdev_io_wait 00:10:16.995 ************************************ 00:10:16.995 04:13:29 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:17.255 * Looking for test storage... 00:10:17.255 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:17.255 04:13:29 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:17.255 04:13:29 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:17.255 04:13:29 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:17.255 04:13:29 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:17.255 04:13:29 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:17.255 04:13:29 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:17.255 04:13:29 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:17.255 04:13:29 -- scripts/common.sh@335 -- # IFS=.-: 00:10:17.255 04:13:29 -- scripts/common.sh@335 -- # read -ra ver1 00:10:17.255 04:13:29 -- scripts/common.sh@336 -- # IFS=.-: 00:10:17.255 04:13:29 -- scripts/common.sh@336 -- # read -ra ver2 00:10:17.255 04:13:29 -- scripts/common.sh@337 -- # local 'op=<' 00:10:17.255 04:13:29 -- scripts/common.sh@339 -- # ver1_l=2 00:10:17.255 04:13:29 -- scripts/common.sh@340 -- # ver2_l=1 00:10:17.255 04:13:29 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:17.255 04:13:29 -- scripts/common.sh@343 -- # case "$op" in 00:10:17.255 04:13:29 -- scripts/common.sh@344 -- # : 1 00:10:17.255 04:13:29 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:17.255 04:13:29 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:17.255 04:13:29 -- scripts/common.sh@364 -- # decimal 1 00:10:17.255 04:13:29 -- scripts/common.sh@352 -- # local d=1 00:10:17.255 04:13:29 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:17.255 04:13:29 -- scripts/common.sh@354 -- # echo 1 00:10:17.255 04:13:29 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:17.255 04:13:29 -- scripts/common.sh@365 -- # decimal 2 00:10:17.255 04:13:29 -- scripts/common.sh@352 -- # local d=2 00:10:17.255 04:13:29 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:17.255 04:13:29 -- scripts/common.sh@354 -- # echo 2 00:10:17.255 04:13:29 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:17.255 04:13:29 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:17.255 04:13:29 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:17.255 04:13:29 -- scripts/common.sh@367 -- # return 0 00:10:17.255 04:13:29 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:17.255 04:13:29 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:17.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.255 --rc genhtml_branch_coverage=1 00:10:17.255 --rc genhtml_function_coverage=1 00:10:17.255 --rc genhtml_legend=1 00:10:17.255 --rc geninfo_all_blocks=1 00:10:17.255 --rc geninfo_unexecuted_blocks=1 00:10:17.255 00:10:17.255 ' 00:10:17.255 04:13:29 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:17.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.255 --rc genhtml_branch_coverage=1 00:10:17.255 --rc genhtml_function_coverage=1 00:10:17.255 --rc genhtml_legend=1 00:10:17.255 --rc geninfo_all_blocks=1 00:10:17.255 --rc geninfo_unexecuted_blocks=1 00:10:17.255 00:10:17.255 ' 00:10:17.255 04:13:29 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:17.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.255 --rc genhtml_branch_coverage=1 00:10:17.255 --rc genhtml_function_coverage=1 00:10:17.255 --rc genhtml_legend=1 00:10:17.255 --rc geninfo_all_blocks=1 00:10:17.255 --rc geninfo_unexecuted_blocks=1 00:10:17.255 00:10:17.255 ' 00:10:17.255 04:13:29 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:17.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.255 --rc genhtml_branch_coverage=1 00:10:17.255 --rc genhtml_function_coverage=1 00:10:17.255 --rc genhtml_legend=1 00:10:17.255 --rc geninfo_all_blocks=1 00:10:17.255 --rc geninfo_unexecuted_blocks=1 00:10:17.255 00:10:17.255 ' 00:10:17.255 04:13:29 -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:17.255 04:13:29 -- nvmf/common.sh@7 -- # uname -s 00:10:17.255 04:13:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:17.255 04:13:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:17.255 04:13:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:17.255 04:13:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:17.255 04:13:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:17.255 04:13:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:17.255 04:13:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:17.255 04:13:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:17.255 04:13:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:17.255 04:13:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:17.255 04:13:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cb4d3929-adbe-4142-b5d1-990bbf2d4fca 
00:10:17.255 04:13:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=cb4d3929-adbe-4142-b5d1-990bbf2d4fca 00:10:17.255 04:13:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:17.255 04:13:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:17.255 04:13:29 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:17.255 04:13:29 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:17.255 04:13:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:17.255 04:13:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:17.255 04:13:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:17.255 04:13:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.255 04:13:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.255 04:13:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.255 04:13:29 -- paths/export.sh@5 -- # export PATH 00:10:17.256 04:13:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.256 04:13:29 -- nvmf/common.sh@46 -- # : 0 00:10:17.256 04:13:29 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:10:17.256 04:13:29 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:10:17.256 04:13:29 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:10:17.256 04:13:29 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:17.256 04:13:29 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:17.256 04:13:29 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:10:17.256 04:13:29 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:10:17.256 04:13:29 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:10:17.256 04:13:29 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:17.256 04:13:29 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:17.256 04:13:29 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:10:17.256 04:13:29 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:10:17.256 04:13:29 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:17.256 04:13:29 -- nvmf/common.sh@436 -- # prepare_net_devs 00:10:17.256 04:13:29 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:10:17.256 04:13:29 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:10:17.256 04:13:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:17.256 04:13:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:17.256 04:13:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:17.256 04:13:29 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:10:17.256 04:13:29 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:10:17.256 04:13:29 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:10:17.256 04:13:29 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:10:17.256 04:13:29 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:10:17.256 04:13:29 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:10:17.256 04:13:29 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:17.256 04:13:29 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:17.256 04:13:29 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:17.256 04:13:29 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:10:17.256 04:13:29 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:17.256 04:13:29 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:17.256 04:13:29 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:17.256 04:13:29 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:17.256 04:13:29 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:17.256 04:13:29 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:17.256 04:13:29 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:17.256 04:13:29 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:17.256 04:13:29 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:10:17.256 04:13:29 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:10:17.256 Cannot find device "nvmf_tgt_br" 00:10:17.256 04:13:29 -- nvmf/common.sh@154 -- # true 00:10:17.256 04:13:29 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:10:17.256 Cannot find device "nvmf_tgt_br2" 00:10:17.256 04:13:29 -- nvmf/common.sh@155 -- # true 00:10:17.256 04:13:29 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:10:17.256 04:13:29 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:10:17.256 Cannot find device "nvmf_tgt_br" 00:10:17.256 04:13:29 -- nvmf/common.sh@157 -- # true 00:10:17.256 04:13:29 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:10:17.256 Cannot find device "nvmf_tgt_br2" 00:10:17.256 04:13:29 -- nvmf/common.sh@158 -- # true 00:10:17.256 04:13:29 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:10:17.515 04:13:29 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:10:17.515 04:13:29 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:17.515 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:17.515 04:13:29 -- nvmf/common.sh@161 -- # true 00:10:17.515 04:13:29 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:17.515 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:17.515 04:13:29 -- nvmf/common.sh@162 -- # true 00:10:17.515 04:13:29 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:10:17.515 04:13:29 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:17.515 04:13:29 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:17.515 04:13:29 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:17.515 04:13:29 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:17.515 04:13:29 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:17.515 04:13:29 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:17.515 04:13:29 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:17.515 04:13:29 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:17.515 04:13:29 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:10:17.515 04:13:29 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:10:17.515 04:13:29 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:10:17.515 04:13:29 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:10:17.515 04:13:29 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:17.515 04:13:29 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:17.515 04:13:29 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:17.515 04:13:29 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:10:17.515 04:13:29 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:10:17.515 04:13:30 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:10:17.515 04:13:30 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:17.515 04:13:30 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:17.515 04:13:30 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:17.515 04:13:30 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:17.515 04:13:30 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:10:17.515 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:17.515 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.083 ms 00:10:17.515 00:10:17.515 --- 10.0.0.2 ping statistics --- 00:10:17.515 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:17.515 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:10:17.515 04:13:30 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:10:17.515 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:17.515 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:10:17.515 00:10:17.515 --- 10.0.0.3 ping statistics --- 00:10:17.515 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:17.515 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:10:17.515 04:13:30 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:17.515 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:17.515 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:10:17.515 00:10:17.515 --- 10.0.0.1 ping statistics --- 00:10:17.515 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:17.515 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:10:17.515 04:13:30 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:17.515 04:13:30 -- nvmf/common.sh@421 -- # return 0 00:10:17.515 04:13:30 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:10:17.515 04:13:30 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:17.515 04:13:30 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:10:17.516 04:13:30 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:10:17.516 04:13:30 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:17.516 04:13:30 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:10:17.516 04:13:30 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:10:17.775 04:13:30 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:10:17.775 04:13:30 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:17.775 04:13:30 -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:17.775 04:13:30 -- common/autotest_common.sh@10 -- # set +x 00:10:17.775 04:13:30 -- nvmf/common.sh@469 -- # nvmfpid=73874 00:10:17.775 04:13:30 -- nvmf/common.sh@470 -- # waitforlisten 73874 00:10:17.775 04:13:30 -- common/autotest_common.sh@829 -- # '[' -z 73874 ']' 00:10:17.775 04:13:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:17.775 04:13:30 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:10:17.775 04:13:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:17.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:17.775 04:13:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:17.775 04:13:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:17.775 04:13:30 -- common/autotest_common.sh@10 -- # set +x 00:10:17.775 [2024-12-06 04:13:30.152378] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:17.775 [2024-12-06 04:13:30.152524] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:17.775 [2024-12-06 04:13:30.299483] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:18.033 [2024-12-06 04:13:30.428898] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:18.033 [2024-12-06 04:13:30.429116] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:18.033 [2024-12-06 04:13:30.429134] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:18.033 [2024-12-06 04:13:30.429147] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:18.033 [2024-12-06 04:13:30.429351] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:18.033 [2024-12-06 04:13:30.429484] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:18.033 [2024-12-06 04:13:30.430275] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:18.033 [2024-12-06 04:13:30.430315] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:18.967 04:13:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:18.967 04:13:31 -- common/autotest_common.sh@862 -- # return 0 00:10:18.967 04:13:31 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:18.967 04:13:31 -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:18.967 04:13:31 -- common/autotest_common.sh@10 -- # set +x 00:10:18.967 04:13:31 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:18.967 04:13:31 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:10:18.967 04:13:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.967 04:13:31 -- common/autotest_common.sh@10 -- # set +x 00:10:18.967 04:13:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.967 04:13:31 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:10:18.967 04:13:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.967 04:13:31 -- common/autotest_common.sh@10 -- # set +x 00:10:18.967 04:13:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.967 04:13:31 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:18.967 04:13:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.967 04:13:31 -- common/autotest_common.sh@10 -- # set +x 00:10:18.967 [2024-12-06 04:13:31.349273] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:18.967 04:13:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.967 04:13:31 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:18.967 04:13:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.967 04:13:31 -- common/autotest_common.sh@10 -- # set +x 00:10:18.967 Malloc0 00:10:18.967 04:13:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.967 04:13:31 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:18.967 04:13:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.967 04:13:31 -- common/autotest_common.sh@10 -- # set +x 00:10:18.967 04:13:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.967 04:13:31 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:18.967 04:13:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.967 04:13:31 -- common/autotest_common.sh@10 -- # set +x 00:10:18.967 04:13:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.967 04:13:31 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:18.967 04:13:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.967 04:13:31 -- common/autotest_common.sh@10 -- # set +x 00:10:18.967 [2024-12-06 04:13:31.423721] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:18.967 04:13:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.967 04:13:31 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=73913 00:10:18.967 04:13:31 
-- target/bdev_io_wait.sh@30 -- # READ_PID=73915 00:10:18.967 04:13:31 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:10:18.967 04:13:31 -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:10:18.967 04:13:31 -- nvmf/common.sh@520 -- # config=() 00:10:18.967 04:13:31 -- nvmf/common.sh@520 -- # local subsystem config 00:10:18.967 04:13:31 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:10:18.967 04:13:31 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=73917 00:10:18.967 04:13:31 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:10:18.967 { 00:10:18.967 "params": { 00:10:18.967 "name": "Nvme$subsystem", 00:10:18.967 "trtype": "$TEST_TRANSPORT", 00:10:18.967 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:18.967 "adrfam": "ipv4", 00:10:18.967 "trsvcid": "$NVMF_PORT", 00:10:18.967 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:18.967 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:18.967 "hdgst": ${hdgst:-false}, 00:10:18.967 "ddgst": ${ddgst:-false} 00:10:18.967 }, 00:10:18.967 "method": "bdev_nvme_attach_controller" 00:10:18.967 } 00:10:18.967 EOF 00:10:18.967 )") 00:10:18.967 04:13:31 -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:10:18.967 04:13:31 -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:10:18.967 04:13:31 -- nvmf/common.sh@542 -- # cat 00:10:18.967 04:13:31 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:10:18.967 04:13:31 -- nvmf/common.sh@520 -- # config=() 00:10:18.967 04:13:31 -- nvmf/common.sh@520 -- # local subsystem config 00:10:18.967 04:13:31 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:10:18.967 04:13:31 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:10:18.967 { 00:10:18.967 "params": { 00:10:18.967 "name": "Nvme$subsystem", 00:10:18.967 "trtype": "$TEST_TRANSPORT", 00:10:18.967 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:18.967 "adrfam": "ipv4", 00:10:18.967 "trsvcid": "$NVMF_PORT", 00:10:18.967 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:18.967 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:18.967 "hdgst": ${hdgst:-false}, 00:10:18.967 "ddgst": ${ddgst:-false} 00:10:18.967 }, 00:10:18.967 "method": "bdev_nvme_attach_controller" 00:10:18.967 } 00:10:18.967 EOF 00:10:18.967 )") 00:10:18.967 04:13:31 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:10:18.967 04:13:31 -- nvmf/common.sh@520 -- # config=() 00:10:18.967 04:13:31 -- nvmf/common.sh@520 -- # local subsystem config 00:10:18.967 04:13:31 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:10:18.967 04:13:31 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:10:18.967 { 00:10:18.967 "params": { 00:10:18.967 "name": "Nvme$subsystem", 00:10:18.967 "trtype": "$TEST_TRANSPORT", 00:10:18.967 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:18.967 "adrfam": "ipv4", 00:10:18.967 "trsvcid": "$NVMF_PORT", 00:10:18.967 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:18.967 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:18.967 "hdgst": ${hdgst:-false}, 00:10:18.967 "ddgst": ${ddgst:-false} 00:10:18.967 }, 00:10:18.967 "method": "bdev_nvme_attach_controller" 00:10:18.967 } 00:10:18.967 EOF 00:10:18.967 )") 00:10:18.967 04:13:31 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=73922 00:10:18.967 04:13:31 -- 
nvmf/common.sh@544 -- # jq . 00:10:18.967 04:13:31 -- nvmf/common.sh@542 -- # cat 00:10:18.967 04:13:31 -- target/bdev_io_wait.sh@35 -- # sync 00:10:18.967 04:13:31 -- nvmf/common.sh@545 -- # IFS=, 00:10:18.967 04:13:31 -- nvmf/common.sh@542 -- # cat 00:10:18.967 04:13:31 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:10:18.967 "params": { 00:10:18.967 "name": "Nvme1", 00:10:18.967 "trtype": "tcp", 00:10:18.967 "traddr": "10.0.0.2", 00:10:18.967 "adrfam": "ipv4", 00:10:18.967 "trsvcid": "4420", 00:10:18.967 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:18.967 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:18.967 "hdgst": false, 00:10:18.967 "ddgst": false 00:10:18.967 }, 00:10:18.967 "method": "bdev_nvme_attach_controller" 00:10:18.967 }' 00:10:18.967 04:13:31 -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:10:18.967 04:13:31 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:10:18.967 04:13:31 -- nvmf/common.sh@520 -- # config=() 00:10:18.967 04:13:31 -- nvmf/common.sh@520 -- # local subsystem config 00:10:18.967 04:13:31 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:10:18.967 04:13:31 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:10:18.967 { 00:10:18.967 "params": { 00:10:18.967 "name": "Nvme$subsystem", 00:10:18.967 "trtype": "$TEST_TRANSPORT", 00:10:18.967 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:18.967 "adrfam": "ipv4", 00:10:18.967 "trsvcid": "$NVMF_PORT", 00:10:18.967 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:18.967 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:18.967 "hdgst": ${hdgst:-false}, 00:10:18.967 "ddgst": ${ddgst:-false} 00:10:18.967 }, 00:10:18.967 "method": "bdev_nvme_attach_controller" 00:10:18.967 } 00:10:18.967 EOF 00:10:18.967 )") 00:10:18.967 04:13:31 -- nvmf/common.sh@542 -- # cat 00:10:18.967 04:13:31 -- nvmf/common.sh@544 -- # jq . 00:10:18.967 04:13:31 -- nvmf/common.sh@545 -- # IFS=, 00:10:18.967 04:13:31 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:10:18.967 "params": { 00:10:18.967 "name": "Nvme1", 00:10:18.967 "trtype": "tcp", 00:10:18.967 "traddr": "10.0.0.2", 00:10:18.967 "adrfam": "ipv4", 00:10:18.968 "trsvcid": "4420", 00:10:18.968 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:18.968 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:18.968 "hdgst": false, 00:10:18.968 "ddgst": false 00:10:18.968 }, 00:10:18.968 "method": "bdev_nvme_attach_controller" 00:10:18.968 }' 00:10:18.968 04:13:31 -- nvmf/common.sh@544 -- # jq . 00:10:18.968 04:13:31 -- nvmf/common.sh@544 -- # jq . 
00:10:18.968 04:13:31 -- nvmf/common.sh@545 -- # IFS=, 00:10:18.968 04:13:31 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:10:18.968 "params": { 00:10:18.968 "name": "Nvme1", 00:10:18.968 "trtype": "tcp", 00:10:18.968 "traddr": "10.0.0.2", 00:10:18.968 "adrfam": "ipv4", 00:10:18.968 "trsvcid": "4420", 00:10:18.968 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:18.968 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:18.968 "hdgst": false, 00:10:18.968 "ddgst": false 00:10:18.968 }, 00:10:18.968 "method": "bdev_nvme_attach_controller" 00:10:18.968 }' 00:10:18.968 04:13:31 -- nvmf/common.sh@545 -- # IFS=, 00:10:18.968 04:13:31 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:10:18.968 "params": { 00:10:18.968 "name": "Nvme1", 00:10:18.968 "trtype": "tcp", 00:10:18.968 "traddr": "10.0.0.2", 00:10:18.968 "adrfam": "ipv4", 00:10:18.968 "trsvcid": "4420", 00:10:18.968 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:18.968 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:18.968 "hdgst": false, 00:10:18.968 "ddgst": false 00:10:18.968 }, 00:10:18.968 "method": "bdev_nvme_attach_controller" 00:10:18.968 }' 00:10:18.968 [2024-12-06 04:13:31.486139] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:18.968 [2024-12-06 04:13:31.486379] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:10:18.968 04:13:31 -- target/bdev_io_wait.sh@37 -- # wait 73913 00:10:18.968 [2024-12-06 04:13:31.504552] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:18.968 [2024-12-06 04:13:31.504618] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:10:18.968 [2024-12-06 04:13:31.509098] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:18.968 [2024-12-06 04:13:31.509165] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:10:18.968 [2024-12-06 04:13:31.512026] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:18.968 [2024-12-06 04:13:31.512139] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:10:19.227 [2024-12-06 04:13:31.691776] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:19.227 [2024-12-06 04:13:31.760497] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:10:19.227 [2024-12-06 04:13:31.764488] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:19.485 [2024-12-06 04:13:31.832006] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:19.485 [2024-12-06 04:13:31.836518] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:10:19.485 [2024-12-06 04:13:31.898760] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:19.485 [2024-12-06 04:13:31.900624] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:10:19.485 Running I/O for 1 seconds... 
00:10:19.485 [2024-12-06 04:13:31.966540] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:10:19.485 Running I/O for 1 seconds... 00:10:19.744 Running I/O for 1 seconds... 00:10:19.744 Running I/O for 1 seconds... 00:10:20.680 00:10:20.680 Latency(us) 00:10:20.680 [2024-12-06T04:13:33.245Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:20.680 [2024-12-06T04:13:33.245Z] Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:10:20.680 Nvme1n1 : 1.01 6410.20 25.04 0.00 0.00 19832.55 10843.23 23354.65 00:10:20.680 [2024-12-06T04:13:33.245Z] =================================================================================================================== 00:10:20.680 [2024-12-06T04:13:33.245Z] Total : 6410.20 25.04 0.00 0.00 19832.55 10843.23 23354.65 00:10:20.680 00:10:20.680 Latency(us) 00:10:20.680 [2024-12-06T04:13:33.245Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:20.680 [2024-12-06T04:13:33.245Z] Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:10:20.680 Nvme1n1 : 1.01 6788.57 26.52 0.00 0.00 18759.09 9472.93 33602.09 00:10:20.680 [2024-12-06T04:13:33.245Z] =================================================================================================================== 00:10:20.680 [2024-12-06T04:13:33.245Z] Total : 6788.57 26.52 0.00 0.00 18759.09 9472.93 33602.09 00:10:20.680 00:10:20.680 Latency(us) 00:10:20.680 [2024-12-06T04:13:33.245Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:20.680 [2024-12-06T04:13:33.245Z] Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:10:20.680 Nvme1n1 : 1.01 7700.93 30.08 0.00 0.00 16551.76 7477.06 25380.31 00:10:20.680 [2024-12-06T04:13:33.245Z] =================================================================================================================== 00:10:20.680 [2024-12-06T04:13:33.245Z] Total : 7700.93 30.08 0.00 0.00 16551.76 7477.06 25380.31 00:10:20.680 00:10:20.680 Latency(us) 00:10:20.680 [2024-12-06T04:13:33.245Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:20.680 [2024-12-06T04:13:33.245Z] Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:10:20.680 Nvme1n1 : 1.00 175822.77 686.81 0.00 0.00 725.42 327.68 1370.30 00:10:20.680 [2024-12-06T04:13:33.245Z] =================================================================================================================== 00:10:20.680 [2024-12-06T04:13:33.245Z] Total : 175822.77 686.81 0.00 0.00 725.42 327.68 1370.30 00:10:20.680 04:13:33 -- target/bdev_io_wait.sh@38 -- # wait 73915 00:10:20.680 04:13:33 -- target/bdev_io_wait.sh@39 -- # wait 73917 00:10:20.939 04:13:33 -- target/bdev_io_wait.sh@40 -- # wait 73922 00:10:20.939 04:13:33 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:20.939 04:13:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.939 04:13:33 -- common/autotest_common.sh@10 -- # set +x 00:10:20.939 04:13:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.939 04:13:33 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:10:20.939 04:13:33 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:10:20.939 04:13:33 -- nvmf/common.sh@476 -- # nvmfcleanup 00:10:20.939 04:13:33 -- nvmf/common.sh@116 -- # sync 00:10:20.939 04:13:33 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:10:20.939 04:13:33 -- nvmf/common.sh@119 -- # set +e 00:10:20.939 04:13:33 -- 
nvmf/common.sh@120 -- # for i in {1..20} 00:10:20.939 04:13:33 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:10:20.939 rmmod nvme_tcp 00:10:20.939 rmmod nvme_fabrics 00:10:20.939 rmmod nvme_keyring 00:10:20.939 04:13:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:10:20.939 04:13:33 -- nvmf/common.sh@123 -- # set -e 00:10:20.939 04:13:33 -- nvmf/common.sh@124 -- # return 0 00:10:20.939 04:13:33 -- nvmf/common.sh@477 -- # '[' -n 73874 ']' 00:10:20.939 04:13:33 -- nvmf/common.sh@478 -- # killprocess 73874 00:10:20.939 04:13:33 -- common/autotest_common.sh@936 -- # '[' -z 73874 ']' 00:10:20.939 04:13:33 -- common/autotest_common.sh@940 -- # kill -0 73874 00:10:20.939 04:13:33 -- common/autotest_common.sh@941 -- # uname 00:10:20.939 04:13:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:20.939 04:13:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73874 00:10:20.939 04:13:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:20.939 04:13:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:20.939 killing process with pid 73874 00:10:20.940 04:13:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73874' 00:10:20.940 04:13:33 -- common/autotest_common.sh@955 -- # kill 73874 00:10:20.940 04:13:33 -- common/autotest_common.sh@960 -- # wait 73874 00:10:21.198 04:13:33 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:10:21.198 04:13:33 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:10:21.198 04:13:33 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:10:21.198 04:13:33 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:21.198 04:13:33 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:10:21.198 04:13:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:21.198 04:13:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:21.198 04:13:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:21.458 04:13:33 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:10:21.458 00:10:21.458 real 0m4.243s 00:10:21.458 user 0m17.773s 00:10:21.458 sys 0m2.273s 00:10:21.458 04:13:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:21.458 04:13:33 -- common/autotest_common.sh@10 -- # set +x 00:10:21.458 ************************************ 00:10:21.458 END TEST nvmf_bdev_io_wait 00:10:21.458 ************************************ 00:10:21.458 04:13:33 -- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:21.458 04:13:33 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:21.458 04:13:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:21.458 04:13:33 -- common/autotest_common.sh@10 -- # set +x 00:10:21.458 ************************************ 00:10:21.458 START TEST nvmf_queue_depth 00:10:21.458 ************************************ 00:10:21.458 04:13:33 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:21.458 * Looking for test storage... 
00:10:21.458 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:21.458 04:13:33 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:21.458 04:13:33 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:21.458 04:13:33 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:21.458 04:13:33 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:21.458 04:13:33 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:21.458 04:13:33 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:21.458 04:13:33 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:21.458 04:13:33 -- scripts/common.sh@335 -- # IFS=.-: 00:10:21.458 04:13:33 -- scripts/common.sh@335 -- # read -ra ver1 00:10:21.459 04:13:33 -- scripts/common.sh@336 -- # IFS=.-: 00:10:21.459 04:13:33 -- scripts/common.sh@336 -- # read -ra ver2 00:10:21.459 04:13:33 -- scripts/common.sh@337 -- # local 'op=<' 00:10:21.459 04:13:33 -- scripts/common.sh@339 -- # ver1_l=2 00:10:21.459 04:13:33 -- scripts/common.sh@340 -- # ver2_l=1 00:10:21.459 04:13:33 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:21.459 04:13:33 -- scripts/common.sh@343 -- # case "$op" in 00:10:21.459 04:13:33 -- scripts/common.sh@344 -- # : 1 00:10:21.459 04:13:33 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:21.459 04:13:33 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:21.459 04:13:33 -- scripts/common.sh@364 -- # decimal 1 00:10:21.459 04:13:33 -- scripts/common.sh@352 -- # local d=1 00:10:21.459 04:13:33 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:21.459 04:13:33 -- scripts/common.sh@354 -- # echo 1 00:10:21.459 04:13:33 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:21.459 04:13:33 -- scripts/common.sh@365 -- # decimal 2 00:10:21.459 04:13:33 -- scripts/common.sh@352 -- # local d=2 00:10:21.459 04:13:33 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:21.459 04:13:33 -- scripts/common.sh@354 -- # echo 2 00:10:21.459 04:13:33 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:21.459 04:13:33 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:21.459 04:13:33 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:21.459 04:13:33 -- scripts/common.sh@367 -- # return 0 00:10:21.459 04:13:33 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:21.459 04:13:33 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:21.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:21.459 --rc genhtml_branch_coverage=1 00:10:21.459 --rc genhtml_function_coverage=1 00:10:21.459 --rc genhtml_legend=1 00:10:21.459 --rc geninfo_all_blocks=1 00:10:21.459 --rc geninfo_unexecuted_blocks=1 00:10:21.459 00:10:21.459 ' 00:10:21.459 04:13:33 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:21.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:21.459 --rc genhtml_branch_coverage=1 00:10:21.459 --rc genhtml_function_coverage=1 00:10:21.459 --rc genhtml_legend=1 00:10:21.459 --rc geninfo_all_blocks=1 00:10:21.459 --rc geninfo_unexecuted_blocks=1 00:10:21.459 00:10:21.459 ' 00:10:21.459 04:13:33 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:21.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:21.459 --rc genhtml_branch_coverage=1 00:10:21.459 --rc genhtml_function_coverage=1 00:10:21.459 --rc genhtml_legend=1 00:10:21.459 --rc geninfo_all_blocks=1 00:10:21.459 --rc geninfo_unexecuted_blocks=1 00:10:21.459 00:10:21.459 ' 00:10:21.459 
04:13:33 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:21.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:21.459 --rc genhtml_branch_coverage=1 00:10:21.459 --rc genhtml_function_coverage=1 00:10:21.459 --rc genhtml_legend=1 00:10:21.459 --rc geninfo_all_blocks=1 00:10:21.459 --rc geninfo_unexecuted_blocks=1 00:10:21.459 00:10:21.459 ' 00:10:21.459 04:13:33 -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:21.459 04:13:33 -- nvmf/common.sh@7 -- # uname -s 00:10:21.459 04:13:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:21.459 04:13:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:21.459 04:13:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:21.459 04:13:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:21.459 04:13:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:21.459 04:13:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:21.459 04:13:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:21.459 04:13:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:21.459 04:13:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:21.459 04:13:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:21.459 04:13:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cb4d3929-adbe-4142-b5d1-990bbf2d4fca 00:10:21.459 04:13:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=cb4d3929-adbe-4142-b5d1-990bbf2d4fca 00:10:21.459 04:13:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:21.459 04:13:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:21.459 04:13:34 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:21.459 04:13:34 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:21.459 04:13:34 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:21.459 04:13:34 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:21.459 04:13:34 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:21.459 04:13:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.459 04:13:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.459 04:13:34 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.459 04:13:34 -- paths/export.sh@5 -- # export PATH 00:10:21.459 04:13:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.459 04:13:34 -- nvmf/common.sh@46 -- # : 0 00:10:21.459 04:13:34 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:10:21.459 04:13:34 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:10:21.459 04:13:34 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:10:21.459 04:13:34 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:21.459 04:13:34 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:21.459 04:13:34 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:10:21.459 04:13:34 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:10:21.459 04:13:34 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:10:21.459 04:13:34 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:10:21.459 04:13:34 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:10:21.459 04:13:34 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:21.459 04:13:34 -- target/queue_depth.sh@19 -- # nvmftestinit 00:10:21.459 04:13:34 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:10:21.459 04:13:34 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:21.459 04:13:34 -- nvmf/common.sh@436 -- # prepare_net_devs 00:10:21.459 04:13:34 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:10:21.459 04:13:34 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:10:21.459 04:13:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:21.459 04:13:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:21.459 04:13:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:21.459 04:13:34 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:10:21.459 04:13:34 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:10:21.459 04:13:34 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:10:21.459 04:13:34 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:10:21.459 04:13:34 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:10:21.459 04:13:34 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:10:21.459 04:13:34 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:21.459 04:13:34 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:21.459 04:13:34 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:21.459 04:13:34 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:10:21.459 04:13:34 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:21.459 04:13:34 -- 
nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:21.459 04:13:34 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:21.459 04:13:34 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:21.459 04:13:34 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:21.459 04:13:34 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:21.459 04:13:34 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:21.459 04:13:34 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:21.459 04:13:34 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:10:21.718 04:13:34 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:10:21.718 Cannot find device "nvmf_tgt_br" 00:10:21.718 04:13:34 -- nvmf/common.sh@154 -- # true 00:10:21.718 04:13:34 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:10:21.718 Cannot find device "nvmf_tgt_br2" 00:10:21.718 04:13:34 -- nvmf/common.sh@155 -- # true 00:10:21.718 04:13:34 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:10:21.718 04:13:34 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:10:21.718 Cannot find device "nvmf_tgt_br" 00:10:21.718 04:13:34 -- nvmf/common.sh@157 -- # true 00:10:21.718 04:13:34 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:10:21.718 Cannot find device "nvmf_tgt_br2" 00:10:21.718 04:13:34 -- nvmf/common.sh@158 -- # true 00:10:21.718 04:13:34 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:10:21.718 04:13:34 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:10:21.718 04:13:34 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:21.718 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:21.718 04:13:34 -- nvmf/common.sh@161 -- # true 00:10:21.718 04:13:34 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:21.718 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:21.718 04:13:34 -- nvmf/common.sh@162 -- # true 00:10:21.718 04:13:34 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:10:21.718 04:13:34 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:21.718 04:13:34 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:21.718 04:13:34 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:21.718 04:13:34 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:21.718 04:13:34 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:21.718 04:13:34 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:21.718 04:13:34 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:21.718 04:13:34 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:21.718 04:13:34 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:10:21.718 04:13:34 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:10:21.718 04:13:34 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:10:21.718 04:13:34 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:10:21.718 04:13:34 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:21.718 04:13:34 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set nvmf_tgt_if2 up 00:10:21.718 04:13:34 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:21.718 04:13:34 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:10:21.718 04:13:34 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:10:21.718 04:13:34 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:10:21.718 04:13:34 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:21.981 04:13:34 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:21.981 04:13:34 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:21.981 04:13:34 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:21.981 04:13:34 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:10:21.981 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:21.981 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:10:21.981 00:10:21.981 --- 10.0.0.2 ping statistics --- 00:10:21.981 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:21.981 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:10:21.981 04:13:34 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:10:21.981 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:21.981 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:10:21.981 00:10:21.981 --- 10.0.0.3 ping statistics --- 00:10:21.981 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:21.981 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:10:21.981 04:13:34 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:21.981 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:21.981 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:10:21.981 00:10:21.981 --- 10.0.0.1 ping statistics --- 00:10:21.981 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:21.981 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:10:21.981 04:13:34 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:21.981 04:13:34 -- nvmf/common.sh@421 -- # return 0 00:10:21.981 04:13:34 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:10:21.981 04:13:34 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:21.981 04:13:34 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:10:21.981 04:13:34 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:10:21.981 04:13:34 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:21.981 04:13:34 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:10:21.981 04:13:34 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:10:21.981 04:13:34 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:10:21.981 04:13:34 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:21.981 04:13:34 -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:21.981 04:13:34 -- common/autotest_common.sh@10 -- # set +x 00:10:21.981 04:13:34 -- nvmf/common.sh@469 -- # nvmfpid=74155 00:10:21.981 04:13:34 -- nvmf/common.sh@470 -- # waitforlisten 74155 00:10:21.981 04:13:34 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:21.981 04:13:34 -- common/autotest_common.sh@829 -- # '[' -z 74155 ']' 00:10:21.981 04:13:34 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:21.981 04:13:34 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:21.981 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:10:21.981 04:13:34 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:21.981 04:13:34 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:21.981 04:13:34 -- common/autotest_common.sh@10 -- # set +x 00:10:21.981 [2024-12-06 04:13:34.395257] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:21.981 [2024-12-06 04:13:34.395911] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:21.981 [2024-12-06 04:13:34.540442] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:22.251 [2024-12-06 04:13:34.621652] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:22.251 [2024-12-06 04:13:34.621810] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:22.251 [2024-12-06 04:13:34.621822] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:22.251 [2024-12-06 04:13:34.621831] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:22.251 [2024-12-06 04:13:34.621855] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:22.818 04:13:35 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:22.818 04:13:35 -- common/autotest_common.sh@862 -- # return 0 00:10:22.818 04:13:35 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:22.818 04:13:35 -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:22.818 04:13:35 -- common/autotest_common.sh@10 -- # set +x 00:10:23.075 04:13:35 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:23.075 04:13:35 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:23.075 04:13:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.075 04:13:35 -- common/autotest_common.sh@10 -- # set +x 00:10:23.075 [2024-12-06 04:13:35.414211] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:23.075 04:13:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.075 04:13:35 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:23.075 04:13:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.075 04:13:35 -- common/autotest_common.sh@10 -- # set +x 00:10:23.075 Malloc0 00:10:23.075 04:13:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.075 04:13:35 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:23.075 04:13:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.075 04:13:35 -- common/autotest_common.sh@10 -- # set +x 00:10:23.075 04:13:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.075 04:13:35 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:23.075 04:13:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.075 04:13:35 -- common/autotest_common.sh@10 -- # set +x 00:10:23.075 04:13:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.075 04:13:35 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:10:23.075 04:13:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.075 04:13:35 -- common/autotest_common.sh@10 -- # set +x 00:10:23.075 [2024-12-06 04:13:35.478343] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:23.075 04:13:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.075 04:13:35 -- target/queue_depth.sh@30 -- # bdevperf_pid=74187 00:10:23.075 04:13:35 -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:10:23.075 04:13:35 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:23.075 04:13:35 -- target/queue_depth.sh@33 -- # waitforlisten 74187 /var/tmp/bdevperf.sock 00:10:23.075 04:13:35 -- common/autotest_common.sh@829 -- # '[' -z 74187 ']' 00:10:23.075 04:13:35 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:23.075 04:13:35 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:23.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:23.075 04:13:35 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:23.075 04:13:35 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:23.075 04:13:35 -- common/autotest_common.sh@10 -- # set +x 00:10:23.075 [2024-12-06 04:13:35.540368] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:23.075 [2024-12-06 04:13:35.540520] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74187 ] 00:10:23.333 [2024-12-06 04:13:35.689278] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:23.333 [2024-12-06 04:13:35.819260] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:24.267 04:13:36 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:24.267 04:13:36 -- common/autotest_common.sh@862 -- # return 0 00:10:24.267 04:13:36 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:10:24.267 04:13:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.267 04:13:36 -- common/autotest_common.sh@10 -- # set +x 00:10:24.267 NVMe0n1 00:10:24.267 04:13:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.267 04:13:36 -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:24.267 Running I/O for 10 seconds... 
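The initiator side of the same queue-depth run condenses to the sketch below: bdevperf is started idle, an NVMe-oF controller is attached over TCP, and the workload is kicked off via bdevperf.py. Paths, socket, and arguments are the ones traced above; the harness's waitforlisten/killprocess handling is simplified to a plain background job, so this is an illustration, not the test itself.

    BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    BDEVPERF_PY=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py
    RPC_PY=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    SOCK=/var/tmp/bdevperf.sock

    # Start bdevperf idle (-z): queue depth 1024, 4 KiB I/O, verify workload, 10 seconds
    $BDEVPERF -z -r "$SOCK" -q 1024 -o 4096 -w verify -t 10 &
    bdevperf_pid=$!
    sleep 2   # crude stand-in for waitforlisten on $SOCK

    # Attach a controller to the subsystem exported on 10.0.0.2:4420
    $RPC_PY -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1

    # Run the workload; the latency summary that follows in this log is its output
    $BDEVPERF_PY -s "$SOCK" perform_tests
    kill "$bdevperf_pid"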
00:10:36.473 00:10:36.473 Latency(us) 00:10:36.473 [2024-12-06T04:13:49.038Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:36.473 [2024-12-06T04:13:49.038Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:10:36.473 Verification LBA range: start 0x0 length 0x4000 00:10:36.473 NVMe0n1 : 10.08 13170.42 51.45 0.00 0.00 77404.70 17277.67 61008.06 00:10:36.473 [2024-12-06T04:13:49.038Z] =================================================================================================================== 00:10:36.473 [2024-12-06T04:13:49.038Z] Total : 13170.42 51.45 0.00 0.00 77404.70 17277.67 61008.06 00:10:36.473 0 00:10:36.473 04:13:46 -- target/queue_depth.sh@39 -- # killprocess 74187 00:10:36.473 04:13:46 -- common/autotest_common.sh@936 -- # '[' -z 74187 ']' 00:10:36.473 04:13:46 -- common/autotest_common.sh@940 -- # kill -0 74187 00:10:36.473 04:13:46 -- common/autotest_common.sh@941 -- # uname 00:10:36.473 04:13:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:36.473 04:13:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74187 00:10:36.473 04:13:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:36.473 04:13:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:36.473 killing process with pid 74187 00:10:36.473 04:13:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74187' 00:10:36.473 04:13:46 -- common/autotest_common.sh@955 -- # kill 74187 00:10:36.473 Received shutdown signal, test time was about 10.000000 seconds 00:10:36.473 00:10:36.473 Latency(us) 00:10:36.473 [2024-12-06T04:13:49.038Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:36.473 [2024-12-06T04:13:49.038Z] =================================================================================================================== 00:10:36.473 [2024-12-06T04:13:49.039Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:36.474 04:13:46 -- common/autotest_common.sh@960 -- # wait 74187 00:10:36.474 04:13:47 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:36.474 04:13:47 -- target/queue_depth.sh@43 -- # nvmftestfini 00:10:36.474 04:13:47 -- nvmf/common.sh@476 -- # nvmfcleanup 00:10:36.474 04:13:47 -- nvmf/common.sh@116 -- # sync 00:10:36.474 04:13:47 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:10:36.474 04:13:47 -- nvmf/common.sh@119 -- # set +e 00:10:36.474 04:13:47 -- nvmf/common.sh@120 -- # for i in {1..20} 00:10:36.474 04:13:47 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:10:36.474 rmmod nvme_tcp 00:10:36.474 rmmod nvme_fabrics 00:10:36.474 rmmod nvme_keyring 00:10:36.474 04:13:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:10:36.474 04:13:47 -- nvmf/common.sh@123 -- # set -e 00:10:36.474 04:13:47 -- nvmf/common.sh@124 -- # return 0 00:10:36.474 04:13:47 -- nvmf/common.sh@477 -- # '[' -n 74155 ']' 00:10:36.474 04:13:47 -- nvmf/common.sh@478 -- # killprocess 74155 00:10:36.474 04:13:47 -- common/autotest_common.sh@936 -- # '[' -z 74155 ']' 00:10:36.474 04:13:47 -- common/autotest_common.sh@940 -- # kill -0 74155 00:10:36.474 04:13:47 -- common/autotest_common.sh@941 -- # uname 00:10:36.474 04:13:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:36.474 04:13:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74155 00:10:36.474 04:13:47 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:10:36.474 04:13:47 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo 
']' 00:10:36.474 killing process with pid 74155 00:10:36.474 04:13:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74155' 00:10:36.474 04:13:47 -- common/autotest_common.sh@955 -- # kill 74155 00:10:36.474 04:13:47 -- common/autotest_common.sh@960 -- # wait 74155 00:10:36.474 04:13:47 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:10:36.474 04:13:47 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:10:36.474 04:13:47 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:10:36.474 04:13:47 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:36.474 04:13:47 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:10:36.474 04:13:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:36.474 04:13:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:36.474 04:13:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:36.474 04:13:47 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:10:36.474 00:10:36.474 real 0m13.894s 00:10:36.474 user 0m23.881s 00:10:36.474 sys 0m2.320s 00:10:36.474 04:13:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:36.474 ************************************ 00:10:36.474 END TEST nvmf_queue_depth 00:10:36.474 ************************************ 00:10:36.474 04:13:47 -- common/autotest_common.sh@10 -- # set +x 00:10:36.474 04:13:47 -- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:36.474 04:13:47 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:36.474 04:13:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:36.474 04:13:47 -- common/autotest_common.sh@10 -- # set +x 00:10:36.474 ************************************ 00:10:36.474 START TEST nvmf_multipath 00:10:36.474 ************************************ 00:10:36.474 04:13:47 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:36.474 * Looking for test storage... 00:10:36.474 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:36.474 04:13:47 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:36.474 04:13:47 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:36.474 04:13:47 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:36.474 04:13:47 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:36.474 04:13:47 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:36.474 04:13:47 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:36.474 04:13:47 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:36.474 04:13:47 -- scripts/common.sh@335 -- # IFS=.-: 00:10:36.474 04:13:47 -- scripts/common.sh@335 -- # read -ra ver1 00:10:36.474 04:13:47 -- scripts/common.sh@336 -- # IFS=.-: 00:10:36.474 04:13:47 -- scripts/common.sh@336 -- # read -ra ver2 00:10:36.474 04:13:47 -- scripts/common.sh@337 -- # local 'op=<' 00:10:36.474 04:13:47 -- scripts/common.sh@339 -- # ver1_l=2 00:10:36.474 04:13:47 -- scripts/common.sh@340 -- # ver2_l=1 00:10:36.474 04:13:47 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:36.474 04:13:47 -- scripts/common.sh@343 -- # case "$op" in 00:10:36.474 04:13:47 -- scripts/common.sh@344 -- # : 1 00:10:36.474 04:13:47 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:36.474 04:13:47 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:36.474 04:13:47 -- scripts/common.sh@364 -- # decimal 1 00:10:36.474 04:13:47 -- scripts/common.sh@352 -- # local d=1 00:10:36.474 04:13:47 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:36.474 04:13:47 -- scripts/common.sh@354 -- # echo 1 00:10:36.474 04:13:47 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:36.474 04:13:47 -- scripts/common.sh@365 -- # decimal 2 00:10:36.474 04:13:47 -- scripts/common.sh@352 -- # local d=2 00:10:36.474 04:13:47 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:36.474 04:13:47 -- scripts/common.sh@354 -- # echo 2 00:10:36.474 04:13:47 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:36.474 04:13:47 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:36.474 04:13:47 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:36.474 04:13:47 -- scripts/common.sh@367 -- # return 0 00:10:36.474 04:13:47 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:36.474 04:13:47 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:36.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:36.474 --rc genhtml_branch_coverage=1 00:10:36.474 --rc genhtml_function_coverage=1 00:10:36.474 --rc genhtml_legend=1 00:10:36.474 --rc geninfo_all_blocks=1 00:10:36.474 --rc geninfo_unexecuted_blocks=1 00:10:36.474 00:10:36.474 ' 00:10:36.474 04:13:47 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:36.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:36.474 --rc genhtml_branch_coverage=1 00:10:36.474 --rc genhtml_function_coverage=1 00:10:36.474 --rc genhtml_legend=1 00:10:36.474 --rc geninfo_all_blocks=1 00:10:36.474 --rc geninfo_unexecuted_blocks=1 00:10:36.474 00:10:36.474 ' 00:10:36.474 04:13:47 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:36.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:36.474 --rc genhtml_branch_coverage=1 00:10:36.474 --rc genhtml_function_coverage=1 00:10:36.474 --rc genhtml_legend=1 00:10:36.474 --rc geninfo_all_blocks=1 00:10:36.474 --rc geninfo_unexecuted_blocks=1 00:10:36.474 00:10:36.474 ' 00:10:36.474 04:13:47 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:36.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:36.474 --rc genhtml_branch_coverage=1 00:10:36.474 --rc genhtml_function_coverage=1 00:10:36.474 --rc genhtml_legend=1 00:10:36.474 --rc geninfo_all_blocks=1 00:10:36.474 --rc geninfo_unexecuted_blocks=1 00:10:36.474 00:10:36.474 ' 00:10:36.474 04:13:47 -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:36.474 04:13:47 -- nvmf/common.sh@7 -- # uname -s 00:10:36.474 04:13:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:36.474 04:13:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:36.474 04:13:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:36.474 04:13:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:36.474 04:13:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:36.474 04:13:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:36.474 04:13:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:36.474 04:13:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:36.474 04:13:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:36.474 04:13:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:36.474 04:13:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cb4d3929-adbe-4142-b5d1-990bbf2d4fca 00:10:36.474 
04:13:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=cb4d3929-adbe-4142-b5d1-990bbf2d4fca 00:10:36.474 04:13:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:36.474 04:13:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:36.474 04:13:47 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:36.474 04:13:47 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:36.474 04:13:47 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:36.474 04:13:47 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:36.474 04:13:47 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:36.474 04:13:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.474 04:13:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.474 04:13:47 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.474 04:13:47 -- paths/export.sh@5 -- # export PATH 00:10:36.475 04:13:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.475 04:13:47 -- nvmf/common.sh@46 -- # : 0 00:10:36.475 04:13:47 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:10:36.475 04:13:47 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:10:36.475 04:13:47 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:10:36.475 04:13:47 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:36.475 04:13:47 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:36.475 04:13:47 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:10:36.475 04:13:47 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:10:36.475 04:13:47 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:10:36.475 04:13:47 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:36.475 04:13:47 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:36.475 04:13:47 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:10:36.475 04:13:47 -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:36.475 04:13:47 -- target/multipath.sh@43 -- # nvmftestinit 00:10:36.475 04:13:47 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:10:36.475 04:13:47 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:36.475 04:13:47 -- nvmf/common.sh@436 -- # prepare_net_devs 00:10:36.475 04:13:47 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:10:36.475 04:13:47 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:10:36.475 04:13:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:36.475 04:13:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:36.475 04:13:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:36.475 04:13:47 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:10:36.475 04:13:47 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:10:36.475 04:13:47 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:10:36.475 04:13:47 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:10:36.475 04:13:47 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:10:36.475 04:13:47 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:10:36.475 04:13:47 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:36.475 04:13:47 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:36.475 04:13:47 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:36.475 04:13:47 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:10:36.475 04:13:47 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:36.475 04:13:47 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:36.475 04:13:47 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:36.475 04:13:47 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:36.475 04:13:47 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:36.475 04:13:47 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:36.475 04:13:47 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:36.475 04:13:47 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:36.475 04:13:47 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:10:36.475 04:13:47 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:10:36.475 Cannot find device "nvmf_tgt_br" 00:10:36.475 04:13:47 -- nvmf/common.sh@154 -- # true 00:10:36.475 04:13:47 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:10:36.475 Cannot find device "nvmf_tgt_br2" 00:10:36.475 04:13:48 -- nvmf/common.sh@155 -- # true 00:10:36.475 04:13:48 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:10:36.475 04:13:48 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:10:36.475 Cannot find device "nvmf_tgt_br" 00:10:36.475 04:13:48 -- nvmf/common.sh@157 -- # true 00:10:36.475 04:13:48 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:10:36.475 Cannot find device "nvmf_tgt_br2" 00:10:36.475 04:13:48 -- nvmf/common.sh@158 -- # true 00:10:36.475 04:13:48 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:10:36.475 04:13:48 -- 
nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:10:36.475 04:13:48 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:36.475 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:36.475 04:13:48 -- nvmf/common.sh@161 -- # true 00:10:36.475 04:13:48 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:36.475 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:36.475 04:13:48 -- nvmf/common.sh@162 -- # true 00:10:36.475 04:13:48 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:10:36.475 04:13:48 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:36.475 04:13:48 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:36.475 04:13:48 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:36.475 04:13:48 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:36.475 04:13:48 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:36.475 04:13:48 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:36.475 04:13:48 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:36.475 04:13:48 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:36.475 04:13:48 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:10:36.475 04:13:48 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:10:36.475 04:13:48 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:10:36.475 04:13:48 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:10:36.475 04:13:48 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:36.475 04:13:48 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:36.475 04:13:48 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:36.475 04:13:48 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:10:36.475 04:13:48 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:10:36.475 04:13:48 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:10:36.475 04:13:48 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:36.475 04:13:48 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:36.475 04:13:48 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:36.475 04:13:48 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:36.475 04:13:48 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:10:36.475 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:36.475 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.155 ms 00:10:36.475 00:10:36.475 --- 10.0.0.2 ping statistics --- 00:10:36.475 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:36.475 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:10:36.475 04:13:48 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:10:36.475 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:10:36.475 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.036 ms 00:10:36.475 00:10:36.475 --- 10.0.0.3 ping statistics --- 00:10:36.475 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:36.475 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:10:36.475 04:13:48 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:36.475 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:36.475 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:10:36.475 00:10:36.475 --- 10.0.0.1 ping statistics --- 00:10:36.475 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:36.475 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:10:36.475 04:13:48 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:36.475 04:13:48 -- nvmf/common.sh@421 -- # return 0 00:10:36.475 04:13:48 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:10:36.475 04:13:48 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:36.475 04:13:48 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:10:36.475 04:13:48 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:10:36.475 04:13:48 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:36.475 04:13:48 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:10:36.475 04:13:48 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:10:36.475 04:13:48 -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:10:36.475 04:13:48 -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:10:36.475 04:13:48 -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:10:36.475 04:13:48 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:36.475 04:13:48 -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:36.475 04:13:48 -- common/autotest_common.sh@10 -- # set +x 00:10:36.475 04:13:48 -- nvmf/common.sh@469 -- # nvmfpid=74522 00:10:36.475 04:13:48 -- nvmf/common.sh@470 -- # waitforlisten 74522 00:10:36.475 04:13:48 -- common/autotest_common.sh@829 -- # '[' -z 74522 ']' 00:10:36.475 04:13:48 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:36.475 04:13:48 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:36.475 04:13:48 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:36.475 04:13:48 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:36.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:36.475 04:13:48 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:36.475 04:13:48 -- common/autotest_common.sh@10 -- # set +x 00:10:36.475 [2024-12-06 04:13:48.394095] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:36.475 [2024-12-06 04:13:48.394224] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:36.475 [2024-12-06 04:13:48.537582] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:36.475 [2024-12-06 04:13:48.670916] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:36.475 [2024-12-06 04:13:48.671156] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
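The nvmf_veth_init sequence traced above (and repeated at the start of each test in this log) builds a three-interface topology: the initiator keeps 10.0.0.1 on nvmf_init_if in the root namespace, the target namespace nvmf_tgt_ns_spdk holds 10.0.0.2 and 10.0.0.3 on two veth interfaces, and the peer ends of all three veths are enslaved to the nvmf_br bridge. A condensed replay, with the commands copied from the trace (error handling and the initial cleanup pass are omitted):

    NS=nvmf_tgt_ns_spdk
    ip netns add "$NS"

    # Three veth pairs: one for the initiator, two for the target paths
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns "$NS"
    ip link set nvmf_tgt_if2 netns "$NS"

    # Addressing: initiator on .1, the two target listeners on .2 and .3
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
    ip netns exec "$NS" ip link set nvmf_tgt_if up
    ip netns exec "$NS" ip link set nvmf_tgt_if2 up
    ip netns exec "$NS" ip link set lo up

    # Bridge the peer ends together and open TCP/4420 toward the initiator interface
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3    # reachability checks, as in the trace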
00:10:36.475 [2024-12-06 04:13:48.671172] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:36.475 [2024-12-06 04:13:48.671181] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:36.475 [2024-12-06 04:13:48.671414] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:36.475 [2024-12-06 04:13:48.671492] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:36.475 [2024-12-06 04:13:48.672192] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:36.475 [2024-12-06 04:13:48.672247] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:37.053 04:13:49 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:37.053 04:13:49 -- common/autotest_common.sh@862 -- # return 0 00:10:37.053 04:13:49 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:37.053 04:13:49 -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:37.053 04:13:49 -- common/autotest_common.sh@10 -- # set +x 00:10:37.053 04:13:49 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:37.053 04:13:49 -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:37.310 [2024-12-06 04:13:49.788028] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:37.310 04:13:49 -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:10:37.569 Malloc0 00:10:37.827 04:13:50 -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:10:37.827 04:13:50 -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:38.394 04:13:50 -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:38.394 [2024-12-06 04:13:50.910278] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:38.394 04:13:50 -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:38.653 [2024-12-06 04:13:51.162660] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:38.653 04:13:51 -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cb4d3929-adbe-4142-b5d1-990bbf2d4fca --hostid=cb4d3929-adbe-4142-b5d1-990bbf2d4fca -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:10:38.913 04:13:51 -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cb4d3929-adbe-4142-b5d1-990bbf2d4fca --hostid=cb4d3929-adbe-4142-b5d1-990bbf2d4fca -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:10:38.913 04:13:51 -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:10:38.913 04:13:51 -- common/autotest_common.sh@1187 -- # local i=0 00:10:38.913 04:13:51 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:10:38.913 04:13:51 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:10:38.913 04:13:51 -- common/autotest_common.sh@1194 -- # sleep 2 00:10:41.456 04:13:53 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 
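With listeners registered on both 10.0.0.2 and 10.0.0.3 for the same subsystem, the host connects once per address, which yields a single NVMe subsystem with two controller paths (nvme0c0n1 and nvme0c1n1 later in this log). A condensed host-side sketch, reusing the host NQN/ID generated earlier in this log; the -g -G flags are carried over verbatim from the trace:

    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cb4d3929-adbe-4142-b5d1-990bbf2d4fca
    HOSTID=cb4d3929-adbe-4142-b5d1-990bbf2d4fca
    SUBNQN=nqn.2016-06.io.spdk:cnode1

    # One connect per listener address gives two paths into the same subsystem
    for addr in 10.0.0.2 10.0.0.3; do
        nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" \
            -t tcp -n "$SUBNQN" -a "$addr" -s 4420 -g -G
    done

    # Wait for the namespace with the expected serial, then list the controller paths
    until lsblk -l -o NAME,SERIAL | grep -q SPDKISFASTANDAWESOME; do sleep 1; done
    ls /sys/class/nvme-subsystem/nvme-subsys*/nvme*/nvme*c*

Later in this log the test steers I/O between those two paths by flipping each listener's ANA state with nvmf_subsystem_listener_set_ana_state (inaccessible / non_optimized / optimized) while fio runs against /dev/nvme0n1.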
00:10:41.456 04:13:53 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:10:41.456 04:13:53 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:10:41.456 04:13:53 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:10:41.456 04:13:53 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:10:41.456 04:13:53 -- common/autotest_common.sh@1197 -- # return 0 00:10:41.457 04:13:53 -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:10:41.457 04:13:53 -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:10:41.457 04:13:53 -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:10:41.457 04:13:53 -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:10:41.457 04:13:53 -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:10:41.457 04:13:53 -- target/multipath.sh@38 -- # echo nvme-subsys0 00:10:41.457 04:13:53 -- target/multipath.sh@38 -- # return 0 00:10:41.457 04:13:53 -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:10:41.457 04:13:53 -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:10:41.457 04:13:53 -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:10:41.457 04:13:53 -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:10:41.457 04:13:53 -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:10:41.457 04:13:53 -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:10:41.457 04:13:53 -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:10:41.457 04:13:53 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:10:41.457 04:13:53 -- target/multipath.sh@22 -- # local timeout=20 00:10:41.457 04:13:53 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:41.457 04:13:53 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:41.457 04:13:53 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:41.457 04:13:53 -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:10:41.457 04:13:53 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:10:41.457 04:13:53 -- target/multipath.sh@22 -- # local timeout=20 00:10:41.457 04:13:53 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:41.457 04:13:53 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:10:41.457 04:13:53 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:41.457 04:13:53 -- target/multipath.sh@85 -- # echo numa 00:10:41.457 04:13:53 -- target/multipath.sh@88 -- # fio_pid=74617 00:10:41.457 04:13:53 -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:10:41.457 04:13:53 -- target/multipath.sh@90 -- # sleep 1 00:10:41.457 [global] 00:10:41.457 thread=1 00:10:41.457 invalidate=1 00:10:41.457 rw=randrw 00:10:41.457 time_based=1 00:10:41.457 runtime=6 00:10:41.457 ioengine=libaio 00:10:41.457 direct=1 00:10:41.457 bs=4096 00:10:41.457 iodepth=128 00:10:41.457 norandommap=0 00:10:41.457 numjobs=1 00:10:41.457 00:10:41.457 verify_dump=1 00:10:41.457 verify_backlog=512 00:10:41.457 verify_state_save=0 00:10:41.457 do_verify=1 00:10:41.457 verify=crc32c-intel 00:10:41.457 [job0] 00:10:41.457 filename=/dev/nvme0n1 00:10:41.457 Could not set queue depth (nvme0n1) 00:10:41.457 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:41.457 fio-3.35 00:10:41.457 Starting 1 thread 00:10:42.024 04:13:54 -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:10:42.308 04:13:54 -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:10:42.567 04:13:55 -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:10:42.567 04:13:55 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:10:42.567 04:13:55 -- target/multipath.sh@22 -- # local timeout=20 00:10:42.567 04:13:55 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:42.567 04:13:55 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:42.567 04:13:55 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:42.567 04:13:55 -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:10:42.567 04:13:55 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:10:42.567 04:13:55 -- target/multipath.sh@22 -- # local timeout=20 00:10:42.567 04:13:55 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:42.567 04:13:55 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:42.567 04:13:55 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:42.567 04:13:55 -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:10:42.824 04:13:55 -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:10:43.390 04:13:55 -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:10:43.390 04:13:55 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:10:43.390 04:13:55 -- target/multipath.sh@22 -- # local timeout=20 00:10:43.390 04:13:55 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:43.390 04:13:55 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:10:43.390 04:13:55 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:43.390 04:13:55 -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:10:43.390 04:13:55 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:10:43.390 04:13:55 -- target/multipath.sh@22 -- # local timeout=20 00:10:43.390 04:13:55 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:43.390 04:13:55 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:43.390 04:13:55 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:43.390 04:13:55 -- target/multipath.sh@104 -- # wait 74617 00:10:47.579 00:10:47.579 job0: (groupid=0, jobs=1): err= 0: pid=74638: Fri Dec 6 04:13:59 2024 00:10:47.579 read: IOPS=9790, BW=38.2MiB/s (40.1MB/s)(230MiB/6007msec) 00:10:47.579 slat (usec): min=4, max=8916, avg=60.14, stdev=244.95 00:10:47.579 clat (usec): min=2279, max=19067, avg=8857.68, stdev=1550.68 00:10:47.579 lat (usec): min=2289, max=19077, avg=8917.82, stdev=1556.35 00:10:47.579 clat percentiles (usec): 00:10:47.579 | 1.00th=[ 4817], 5.00th=[ 6915], 10.00th=[ 7504], 20.00th=[ 7963], 00:10:47.579 | 30.00th=[ 8225], 40.00th=[ 8455], 50.00th=[ 8717], 60.00th=[ 8979], 00:10:47.579 | 70.00th=[ 9241], 80.00th=[ 9503], 90.00th=[10421], 95.00th=[12387], 00:10:47.579 | 99.00th=[13960], 99.50th=[14484], 99.90th=[15795], 99.95th=[16909], 00:10:47.579 | 99.99th=[17957] 00:10:47.579 bw ( KiB/s): min=10680, max=26586, per=51.61%, avg=20210.73, stdev=5169.23, samples=11 00:10:47.579 iops : min= 2670, max= 6646, avg=5052.64, stdev=1292.25, samples=11 00:10:47.579 write: IOPS=5991, BW=23.4MiB/s (24.5MB/s)(120MiB/5134msec); 0 zone resets 00:10:47.579 slat (usec): min=15, max=3197, avg=69.61, stdev=176.23 00:10:47.579 clat (usec): min=1471, max=17940, avg=7834.43, stdev=1437.77 00:10:47.579 lat (usec): min=1499, max=17963, avg=7904.04, stdev=1442.96 00:10:47.579 clat percentiles (usec): 00:10:47.579 | 1.00th=[ 3687], 5.00th=[ 4752], 10.00th=[ 6325], 20.00th=[ 7177], 00:10:47.579 | 30.00th=[ 7504], 40.00th=[ 7767], 50.00th=[ 7963], 60.00th=[ 8160], 00:10:47.579 | 70.00th=[ 8356], 80.00th=[ 8586], 90.00th=[ 9110], 95.00th=[ 9634], 00:10:47.579 | 99.00th=[12256], 99.50th=[13173], 99.90th=[15139], 99.95th=[16057], 00:10:47.579 | 99.99th=[17957] 00:10:47.579 bw ( KiB/s): min=10920, max=26139, per=84.57%, avg=20268.09, stdev=4825.00, samples=11 00:10:47.579 iops : min= 2730, max= 6534, avg=5066.91, stdev=1206.13, samples=11 00:10:47.579 lat (msec) : 2=0.01%, 4=0.83%, 10=89.44%, 20=9.73% 00:10:47.579 cpu : usr=5.39%, sys=20.30%, ctx=5296, majf=0, minf=127 00:10:47.579 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:10:47.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:47.579 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:47.579 issued rwts: total=58811,30759,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:47.579 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:47.579 00:10:47.579 Run status group 0 (all jobs): 00:10:47.579 READ: bw=38.2MiB/s (40.1MB/s), 38.2MiB/s-38.2MiB/s (40.1MB/s-40.1MB/s), io=230MiB (241MB), run=6007-6007msec 00:10:47.579 WRITE: bw=23.4MiB/s (24.5MB/s), 23.4MiB/s-23.4MiB/s (24.5MB/s-24.5MB/s), io=120MiB (126MB), run=5134-5134msec 00:10:47.579 00:10:47.579 Disk stats (read/write): 00:10:47.579 nvme0n1: ios=57955/30165, merge=0/0, 
ticks=492804/222526, in_queue=715330, util=98.65% 00:10:47.579 04:13:59 -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:10:47.579 04:14:00 -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:10:48.148 04:14:00 -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:10:48.148 04:14:00 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:10:48.148 04:14:00 -- target/multipath.sh@22 -- # local timeout=20 00:10:48.148 04:14:00 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:48.148 04:14:00 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:48.148 04:14:00 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:48.148 04:14:00 -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:10:48.148 04:14:00 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:10:48.148 04:14:00 -- target/multipath.sh@22 -- # local timeout=20 00:10:48.148 04:14:00 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:48.148 04:14:00 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:48.148 04:14:00 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:48.148 04:14:00 -- target/multipath.sh@113 -- # echo round-robin 00:10:48.148 04:14:00 -- target/multipath.sh@116 -- # fio_pid=74720 00:10:48.148 04:14:00 -- target/multipath.sh@118 -- # sleep 1 00:10:48.148 04:14:00 -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:10:48.148 [global] 00:10:48.148 thread=1 00:10:48.148 invalidate=1 00:10:48.148 rw=randrw 00:10:48.148 time_based=1 00:10:48.148 runtime=6 00:10:48.148 ioengine=libaio 00:10:48.148 direct=1 00:10:48.148 bs=4096 00:10:48.148 iodepth=128 00:10:48.148 norandommap=0 00:10:48.148 numjobs=1 00:10:48.148 00:10:48.148 verify_dump=1 00:10:48.148 verify_backlog=512 00:10:48.148 verify_state_save=0 00:10:48.148 do_verify=1 00:10:48.148 verify=crc32c-intel 00:10:48.148 [job0] 00:10:48.148 filename=/dev/nvme0n1 00:10:48.148 Could not set queue depth (nvme0n1) 00:10:48.148 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:48.148 fio-3.35 00:10:48.148 Starting 1 thread 00:10:49.118 04:14:01 -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:10:49.377 04:14:01 -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:10:49.636 04:14:02 -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:10:49.636 04:14:02 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:10:49.636 04:14:02 -- target/multipath.sh@22 -- # local timeout=20 00:10:49.636 04:14:02 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:49.636 04:14:02 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:10:49.636 04:14:02 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:49.636 04:14:02 -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:10:49.636 04:14:02 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:10:49.636 04:14:02 -- target/multipath.sh@22 -- # local timeout=20 00:10:49.636 04:14:02 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:49.636 04:14:02 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:49.636 04:14:02 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:49.636 04:14:02 -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:10:49.896 04:14:02 -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:10:50.155 04:14:02 -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:10:50.155 04:14:02 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:10:50.155 04:14:02 -- target/multipath.sh@22 -- # local timeout=20 00:10:50.155 04:14:02 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:50.155 04:14:02 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:50.155 04:14:02 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:50.155 04:14:02 -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:10:50.155 04:14:02 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:10:50.155 04:14:02 -- target/multipath.sh@22 -- # local timeout=20 00:10:50.155 04:14:02 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:50.155 04:14:02 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:10:50.155 04:14:02 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:50.155 04:14:02 -- target/multipath.sh@132 -- # wait 74720 00:10:54.347 00:10:54.347 job0: (groupid=0, jobs=1): err= 0: pid=74741: Fri Dec 6 04:14:06 2024 00:10:54.347 read: IOPS=10.6k, BW=41.3MiB/s (43.3MB/s)(248MiB/6002msec) 00:10:54.347 slat (usec): min=4, max=6812, avg=48.64, stdev=221.31 00:10:54.347 clat (usec): min=341, max=20283, avg=8318.77, stdev=2362.39 00:10:54.347 lat (usec): min=354, max=20292, avg=8367.41, stdev=2370.53 00:10:54.347 clat percentiles (usec): 00:10:54.347 | 1.00th=[ 2376], 5.00th=[ 4228], 10.00th=[ 5211], 20.00th=[ 6915], 00:10:54.347 | 30.00th=[ 7635], 40.00th=[ 8029], 50.00th=[ 8455], 60.00th=[ 8717], 00:10:54.347 | 70.00th=[ 9110], 80.00th=[ 9634], 90.00th=[10945], 95.00th=[12649], 00:10:54.347 | 99.00th=[14877], 99.50th=[16319], 99.90th=[18220], 99.95th=[19530], 00:10:54.347 | 99.99th=[19792] 00:10:54.347 bw ( KiB/s): min= 9704, max=40496, per=52.86%, avg=22342.64, stdev=7822.40, samples=11 00:10:54.347 iops : min= 2426, max=10124, avg=5585.64, stdev=1955.60, samples=11 00:10:54.347 write: IOPS=6446, BW=25.2MiB/s (26.4MB/s)(132MiB/5240msec); 0 zone resets 00:10:54.347 slat (usec): min=11, max=2606, avg=55.36, stdev=143.80 00:10:54.347 clat (usec): min=698, max=18586, avg=6944.11, stdev=2116.16 00:10:54.347 lat (usec): min=736, max=18607, avg=6999.48, stdev=2127.49 00:10:54.347 clat percentiles (usec): 00:10:54.347 | 1.00th=[ 2212], 5.00th=[ 3163], 10.00th=[ 3720], 20.00th=[ 4686], 00:10:54.347 | 30.00th=[ 6259], 40.00th=[ 7111], 50.00th=[ 7504], 60.00th=[ 7832], 00:10:54.347 | 70.00th=[ 8094], 80.00th=[ 8455], 90.00th=[ 8979], 95.00th=[ 9634], 00:10:54.347 | 99.00th=[11994], 99.50th=[12649], 99.90th=[15795], 99.95th=[16909], 00:10:54.347 | 99.99th=[18482] 00:10:54.347 bw ( KiB/s): min= 9968, max=40960, per=86.75%, avg=22371.27, stdev=7772.54, samples=11 00:10:54.347 iops : min= 2492, max=10240, avg=5592.82, stdev=1943.14, samples=11 00:10:54.347 lat (usec) : 500=0.01%, 750=0.05%, 1000=0.07% 00:10:54.347 lat (msec) : 2=0.52%, 4=6.61%, 10=82.12%, 20=10.62%, 50=0.01% 00:10:54.347 cpu : usr=5.37%, sys=21.30%, ctx=5585, majf=0, minf=127 00:10:54.347 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:10:54.347 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:54.347 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:54.347 issued rwts: total=63415,33781,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:54.347 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:54.347 00:10:54.347 Run status group 0 (all jobs): 00:10:54.347 READ: bw=41.3MiB/s (43.3MB/s), 41.3MiB/s-41.3MiB/s (43.3MB/s-43.3MB/s), io=248MiB (260MB), run=6002-6002msec 00:10:54.347 WRITE: bw=25.2MiB/s (26.4MB/s), 25.2MiB/s-25.2MiB/s (26.4MB/s-26.4MB/s), io=132MiB (138MB), run=5240-5240msec 00:10:54.347 00:10:54.347 Disk stats (read/write): 00:10:54.347 nvme0n1: ios=62513/33269, merge=0/0, ticks=498622/216954, in_queue=715576, util=98.71% 00:10:54.347 04:14:06 -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:54.606 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:54.606 04:14:06 -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:54.606 04:14:06 -- common/autotest_common.sh@1208 -- # local i=0 00:10:54.606 04:14:06 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:10:54.606 04:14:06 
-- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:54.606 04:14:06 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:10:54.606 04:14:06 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:54.606 04:14:06 -- common/autotest_common.sh@1220 -- # return 0 00:10:54.606 04:14:06 -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:54.865 04:14:07 -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:10:54.865 04:14:07 -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:10:54.865 04:14:07 -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:10:54.865 04:14:07 -- target/multipath.sh@144 -- # nvmftestfini 00:10:54.865 04:14:07 -- nvmf/common.sh@476 -- # nvmfcleanup 00:10:54.865 04:14:07 -- nvmf/common.sh@116 -- # sync 00:10:54.865 04:14:07 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:10:54.865 04:14:07 -- nvmf/common.sh@119 -- # set +e 00:10:54.865 04:14:07 -- nvmf/common.sh@120 -- # for i in {1..20} 00:10:54.865 04:14:07 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:10:54.865 rmmod nvme_tcp 00:10:54.865 rmmod nvme_fabrics 00:10:54.865 rmmod nvme_keyring 00:10:54.865 04:14:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:10:54.865 04:14:07 -- nvmf/common.sh@123 -- # set -e 00:10:54.865 04:14:07 -- nvmf/common.sh@124 -- # return 0 00:10:54.865 04:14:07 -- nvmf/common.sh@477 -- # '[' -n 74522 ']' 00:10:54.865 04:14:07 -- nvmf/common.sh@478 -- # killprocess 74522 00:10:54.865 04:14:07 -- common/autotest_common.sh@936 -- # '[' -z 74522 ']' 00:10:54.865 04:14:07 -- common/autotest_common.sh@940 -- # kill -0 74522 00:10:54.865 04:14:07 -- common/autotest_common.sh@941 -- # uname 00:10:54.865 04:14:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:54.865 04:14:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74522 00:10:54.865 killing process with pid 74522 00:10:54.865 04:14:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:54.865 04:14:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:54.865 04:14:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74522' 00:10:54.865 04:14:07 -- common/autotest_common.sh@955 -- # kill 74522 00:10:54.865 04:14:07 -- common/autotest_common.sh@960 -- # wait 74522 00:10:55.432 04:14:07 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:10:55.432 04:14:07 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:10:55.432 04:14:07 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:10:55.432 04:14:07 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:55.432 04:14:07 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:10:55.432 04:14:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:55.432 04:14:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:55.432 04:14:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:55.432 04:14:07 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:10:55.432 00:10:55.432 real 0m20.035s 00:10:55.432 user 1m15.899s 00:10:55.432 sys 0m8.691s 00:10:55.432 04:14:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:55.432 04:14:07 -- common/autotest_common.sh@10 -- # set +x 00:10:55.432 ************************************ 00:10:55.432 END TEST nvmf_multipath 00:10:55.432 ************************************ 00:10:55.432 04:14:07 -- nvmf/nvmf.sh@52 -- # run_test 
nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:55.432 04:14:07 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:55.432 04:14:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:55.432 04:14:07 -- common/autotest_common.sh@10 -- # set +x 00:10:55.432 ************************************ 00:10:55.432 START TEST nvmf_zcopy 00:10:55.432 ************************************ 00:10:55.432 04:14:07 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:55.432 * Looking for test storage... 00:10:55.432 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:55.432 04:14:07 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:55.432 04:14:07 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:55.432 04:14:07 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:55.691 04:14:07 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:55.691 04:14:07 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:55.691 04:14:07 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:55.691 04:14:07 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:55.691 04:14:07 -- scripts/common.sh@335 -- # IFS=.-: 00:10:55.691 04:14:07 -- scripts/common.sh@335 -- # read -ra ver1 00:10:55.691 04:14:07 -- scripts/common.sh@336 -- # IFS=.-: 00:10:55.691 04:14:07 -- scripts/common.sh@336 -- # read -ra ver2 00:10:55.691 04:14:07 -- scripts/common.sh@337 -- # local 'op=<' 00:10:55.691 04:14:07 -- scripts/common.sh@339 -- # ver1_l=2 00:10:55.691 04:14:07 -- scripts/common.sh@340 -- # ver2_l=1 00:10:55.691 04:14:07 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:55.691 04:14:07 -- scripts/common.sh@343 -- # case "$op" in 00:10:55.691 04:14:07 -- scripts/common.sh@344 -- # : 1 00:10:55.691 04:14:07 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:55.691 04:14:07 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:55.691 04:14:07 -- scripts/common.sh@364 -- # decimal 1 00:10:55.691 04:14:08 -- scripts/common.sh@352 -- # local d=1 00:10:55.691 04:14:08 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:55.691 04:14:08 -- scripts/common.sh@354 -- # echo 1 00:10:55.691 04:14:08 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:55.691 04:14:08 -- scripts/common.sh@365 -- # decimal 2 00:10:55.691 04:14:08 -- scripts/common.sh@352 -- # local d=2 00:10:55.691 04:14:08 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:55.691 04:14:08 -- scripts/common.sh@354 -- # echo 2 00:10:55.691 04:14:08 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:55.691 04:14:08 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:55.691 04:14:08 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:55.691 04:14:08 -- scripts/common.sh@367 -- # return 0 00:10:55.691 04:14:08 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:55.691 04:14:08 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:55.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.691 --rc genhtml_branch_coverage=1 00:10:55.691 --rc genhtml_function_coverage=1 00:10:55.691 --rc genhtml_legend=1 00:10:55.691 --rc geninfo_all_blocks=1 00:10:55.691 --rc geninfo_unexecuted_blocks=1 00:10:55.692 00:10:55.692 ' 00:10:55.692 04:14:08 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:55.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.692 --rc genhtml_branch_coverage=1 00:10:55.692 --rc genhtml_function_coverage=1 00:10:55.692 --rc genhtml_legend=1 00:10:55.692 --rc geninfo_all_blocks=1 00:10:55.692 --rc geninfo_unexecuted_blocks=1 00:10:55.692 00:10:55.692 ' 00:10:55.692 04:14:08 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:55.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.692 --rc genhtml_branch_coverage=1 00:10:55.692 --rc genhtml_function_coverage=1 00:10:55.692 --rc genhtml_legend=1 00:10:55.692 --rc geninfo_all_blocks=1 00:10:55.692 --rc geninfo_unexecuted_blocks=1 00:10:55.692 00:10:55.692 ' 00:10:55.692 04:14:08 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:55.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.692 --rc genhtml_branch_coverage=1 00:10:55.692 --rc genhtml_function_coverage=1 00:10:55.692 --rc genhtml_legend=1 00:10:55.692 --rc geninfo_all_blocks=1 00:10:55.692 --rc geninfo_unexecuted_blocks=1 00:10:55.692 00:10:55.692 ' 00:10:55.692 04:14:08 -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:55.692 04:14:08 -- nvmf/common.sh@7 -- # uname -s 00:10:55.692 04:14:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:55.692 04:14:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:55.692 04:14:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:55.692 04:14:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:55.692 04:14:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:55.692 04:14:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:55.692 04:14:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:55.692 04:14:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:55.692 04:14:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:55.692 04:14:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:55.692 04:14:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cb4d3929-adbe-4142-b5d1-990bbf2d4fca 00:10:55.692 
04:14:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=cb4d3929-adbe-4142-b5d1-990bbf2d4fca 00:10:55.692 04:14:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:55.692 04:14:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:55.692 04:14:08 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:55.692 04:14:08 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:55.692 04:14:08 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:55.692 04:14:08 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:55.692 04:14:08 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:55.692 04:14:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.692 04:14:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.692 04:14:08 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.692 04:14:08 -- paths/export.sh@5 -- # export PATH 00:10:55.692 04:14:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.692 04:14:08 -- nvmf/common.sh@46 -- # : 0 00:10:55.692 04:14:08 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:10:55.692 04:14:08 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:10:55.692 04:14:08 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:10:55.692 04:14:08 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:55.692 04:14:08 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:55.692 04:14:08 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
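The trace above steps through scripts/common.sh's dotted-version comparison to conclude that the installed lcov (1.15) predates 2.x before it enables the extra --rc coverage options. A condensed, standalone sketch of that comparison (not the verbatim cmp_versions/lt helpers from scripts/common.sh):

version_lt() {                          # succeeds when $1 is older than $2
    local IFS=.
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1   # first larger field: newer, so not less-than
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # first smaller field: older
    done
    return 1                                        # equal versions: not less-than
}
version_lt 1.15 2 && echo "lcov is older than 2.x"  # same outcome as the 'lt 1.15 2' trace above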
00:10:55.692 04:14:08 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:10:55.692 04:14:08 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:10:55.692 04:14:08 -- target/zcopy.sh@12 -- # nvmftestinit 00:10:55.692 04:14:08 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:10:55.692 04:14:08 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:55.692 04:14:08 -- nvmf/common.sh@436 -- # prepare_net_devs 00:10:55.692 04:14:08 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:10:55.692 04:14:08 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:10:55.692 04:14:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:55.692 04:14:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:55.692 04:14:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:55.692 04:14:08 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:10:55.692 04:14:08 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:10:55.692 04:14:08 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:10:55.692 04:14:08 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:10:55.692 04:14:08 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:10:55.692 04:14:08 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:10:55.692 04:14:08 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:55.692 04:14:08 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:55.692 04:14:08 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:55.692 04:14:08 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:10:55.692 04:14:08 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:55.692 04:14:08 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:55.692 04:14:08 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:55.692 04:14:08 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:55.692 04:14:08 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:55.692 04:14:08 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:55.692 04:14:08 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:55.692 04:14:08 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:55.692 04:14:08 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:10:55.692 04:14:08 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:10:55.692 Cannot find device "nvmf_tgt_br" 00:10:55.692 04:14:08 -- nvmf/common.sh@154 -- # true 00:10:55.692 04:14:08 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:10:55.692 Cannot find device "nvmf_tgt_br2" 00:10:55.692 04:14:08 -- nvmf/common.sh@155 -- # true 00:10:55.692 04:14:08 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:10:55.692 04:14:08 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:10:55.692 Cannot find device "nvmf_tgt_br" 00:10:55.692 04:14:08 -- nvmf/common.sh@157 -- # true 00:10:55.692 04:14:08 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:10:55.692 Cannot find device "nvmf_tgt_br2" 00:10:55.692 04:14:08 -- nvmf/common.sh@158 -- # true 00:10:55.692 04:14:08 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:10:55.692 04:14:08 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:10:55.692 04:14:08 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:55.692 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:55.692 04:14:08 -- nvmf/common.sh@161 -- # true 00:10:55.692 04:14:08 -- nvmf/common.sh@162 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:55.692 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:55.692 04:14:08 -- nvmf/common.sh@162 -- # true 00:10:55.692 04:14:08 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:10:55.692 04:14:08 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:55.692 04:14:08 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:55.692 04:14:08 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:55.692 04:14:08 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:55.692 04:14:08 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:55.692 04:14:08 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:55.692 04:14:08 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:55.951 04:14:08 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:55.951 04:14:08 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:10:55.951 04:14:08 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:10:55.951 04:14:08 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:10:55.951 04:14:08 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:10:55.951 04:14:08 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:55.951 04:14:08 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:55.951 04:14:08 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:55.951 04:14:08 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:10:55.951 04:14:08 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:10:55.951 04:14:08 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:10:55.951 04:14:08 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:55.951 04:14:08 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:55.951 04:14:08 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:55.951 04:14:08 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:55.951 04:14:08 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:10:55.951 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:55.951 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:10:55.951 00:10:55.951 --- 10.0.0.2 ping statistics --- 00:10:55.951 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:55.951 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:10:55.951 04:14:08 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:10:55.951 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:55.951 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:10:55.951 00:10:55.951 --- 10.0.0.3 ping statistics --- 00:10:55.951 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:55.951 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:10:55.951 04:14:08 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:55.951 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:55.951 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:10:55.951 00:10:55.951 --- 10.0.0.1 ping statistics --- 00:10:55.951 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:55.951 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:10:55.951 04:14:08 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:55.951 04:14:08 -- nvmf/common.sh@421 -- # return 0 00:10:55.951 04:14:08 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:10:55.951 04:14:08 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:55.951 04:14:08 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:10:55.951 04:14:08 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:10:55.951 04:14:08 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:55.951 04:14:08 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:10:55.951 04:14:08 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:10:55.951 04:14:08 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:10:55.951 04:14:08 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:55.951 04:14:08 -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:55.951 04:14:08 -- common/autotest_common.sh@10 -- # set +x 00:10:55.951 04:14:08 -- nvmf/common.sh@469 -- # nvmfpid=75002 00:10:55.951 04:14:08 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:55.951 04:14:08 -- nvmf/common.sh@470 -- # waitforlisten 75002 00:10:55.951 04:14:08 -- common/autotest_common.sh@829 -- # '[' -z 75002 ']' 00:10:55.951 04:14:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:55.951 04:14:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:55.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:55.951 04:14:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:55.951 04:14:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:55.951 04:14:08 -- common/autotest_common.sh@10 -- # set +x 00:10:55.951 [2024-12-06 04:14:08.440479] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:55.951 [2024-12-06 04:14:08.440625] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:56.221 [2024-12-06 04:14:08.579244] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:56.221 [2024-12-06 04:14:08.666806] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:56.221 [2024-12-06 04:14:08.666952] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:56.221 [2024-12-06 04:14:08.666966] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:56.221 [2024-12-06 04:14:08.666983] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
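The block above is the veth/namespace bring-up: any stale interfaces are torn down first (hence the harmless "Cannot find device" / "Cannot open network namespace" messages, each followed by true), then an initiator veth pair stays on the host side, the target veth pairs are moved into the nvmf_tgt_ns_spdk namespace, everything is joined by the nvmf_br bridge, connectivity is checked with single pings, and finally nvmf_tgt (pid 75002 here) is started inside the namespace with -m 0x2. A condensed sketch of the topology as it appears in the trace (the second target interface nvmf_tgt_if2/10.0.0.3 is created the same way and omitted here):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br       # initiator side, stays on the host
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br        # target side
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2                                              # target address reachable from the host
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1               # host address reachable from the target namespace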
00:10:56.221 [2024-12-06 04:14:08.667008] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:57.174 04:14:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:57.174 04:14:09 -- common/autotest_common.sh@862 -- # return 0 00:10:57.174 04:14:09 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:57.174 04:14:09 -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:57.174 04:14:09 -- common/autotest_common.sh@10 -- # set +x 00:10:57.174 04:14:09 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:57.174 04:14:09 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:10:57.174 04:14:09 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:10:57.174 04:14:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.174 04:14:09 -- common/autotest_common.sh@10 -- # set +x 00:10:57.174 [2024-12-06 04:14:09.517018] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:57.174 04:14:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.174 04:14:09 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:57.174 04:14:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.174 04:14:09 -- common/autotest_common.sh@10 -- # set +x 00:10:57.174 04:14:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.174 04:14:09 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:57.174 04:14:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.174 04:14:09 -- common/autotest_common.sh@10 -- # set +x 00:10:57.174 [2024-12-06 04:14:09.533174] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:57.174 04:14:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.174 04:14:09 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:57.174 04:14:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.174 04:14:09 -- common/autotest_common.sh@10 -- # set +x 00:10:57.174 04:14:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.174 04:14:09 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:10:57.174 04:14:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.174 04:14:09 -- common/autotest_common.sh@10 -- # set +x 00:10:57.174 malloc0 00:10:57.174 04:14:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.174 04:14:09 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:57.174 04:14:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.174 04:14:09 -- common/autotest_common.sh@10 -- # set +x 00:10:57.174 04:14:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.174 04:14:09 -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:10:57.174 04:14:09 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:10:57.174 04:14:09 -- nvmf/common.sh@520 -- # config=() 00:10:57.174 04:14:09 -- nvmf/common.sh@520 -- # local subsystem config 00:10:57.174 04:14:09 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:10:57.174 04:14:09 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:10:57.174 { 00:10:57.174 "params": { 00:10:57.174 "name": "Nvme$subsystem", 00:10:57.174 "trtype": "$TEST_TRANSPORT", 
00:10:57.174 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:57.174 "adrfam": "ipv4", 00:10:57.174 "trsvcid": "$NVMF_PORT", 00:10:57.174 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:57.174 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:57.174 "hdgst": ${hdgst:-false}, 00:10:57.174 "ddgst": ${ddgst:-false} 00:10:57.174 }, 00:10:57.174 "method": "bdev_nvme_attach_controller" 00:10:57.174 } 00:10:57.174 EOF 00:10:57.174 )") 00:10:57.174 04:14:09 -- nvmf/common.sh@542 -- # cat 00:10:57.174 04:14:09 -- nvmf/common.sh@544 -- # jq . 00:10:57.174 04:14:09 -- nvmf/common.sh@545 -- # IFS=, 00:10:57.174 04:14:09 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:10:57.174 "params": { 00:10:57.174 "name": "Nvme1", 00:10:57.174 "trtype": "tcp", 00:10:57.174 "traddr": "10.0.0.2", 00:10:57.174 "adrfam": "ipv4", 00:10:57.174 "trsvcid": "4420", 00:10:57.174 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:57.174 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:57.174 "hdgst": false, 00:10:57.174 "ddgst": false 00:10:57.174 }, 00:10:57.174 "method": "bdev_nvme_attach_controller" 00:10:57.174 }' 00:10:57.174 [2024-12-06 04:14:09.629209] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:57.174 [2024-12-06 04:14:09.629311] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75039 ] 00:10:57.434 [2024-12-06 04:14:09.769810] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:57.434 [2024-12-06 04:14:09.843195] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:57.693 Running I/O for 10 seconds... 00:11:07.696 00:11:07.696 Latency(us) 00:11:07.696 [2024-12-06T04:14:20.261Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:07.696 [2024-12-06T04:14:20.261Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:11:07.696 Verification LBA range: start 0x0 length 0x1000 00:11:07.696 Nvme1n1 : 10.01 9222.90 72.05 0.00 0.00 13843.47 1146.88 21090.68 00:11:07.696 [2024-12-06T04:14:20.261Z] =================================================================================================================== 00:11:07.696 [2024-12-06T04:14:20.261Z] Total : 9222.90 72.05 0.00 0.00 13843.47 1146.88 21090.68 00:11:07.955 04:14:20 -- target/zcopy.sh@39 -- # perfpid=75152 00:11:07.955 04:14:20 -- target/zcopy.sh@41 -- # xtrace_disable 00:11:07.955 04:14:20 -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:11:07.955 04:14:20 -- common/autotest_common.sh@10 -- # set +x 00:11:07.955 04:14:20 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:11:07.955 04:14:20 -- nvmf/common.sh@520 -- # config=() 00:11:07.955 04:14:20 -- nvmf/common.sh@520 -- # local subsystem config 00:11:07.955 04:14:20 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:11:07.955 04:14:20 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:11:07.955 { 00:11:07.955 "params": { 00:11:07.955 "name": "Nvme$subsystem", 00:11:07.955 "trtype": "$TEST_TRANSPORT", 00:11:07.955 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:07.955 "adrfam": "ipv4", 00:11:07.955 "trsvcid": "$NVMF_PORT", 00:11:07.955 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:07.955 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:07.955 "hdgst": ${hdgst:-false}, 00:11:07.955 "ddgst": ${ddgst:-false} 
00:11:07.955 }, 00:11:07.955 "method": "bdev_nvme_attach_controller" 00:11:07.955 } 00:11:07.955 EOF 00:11:07.955 )") 00:11:07.955 04:14:20 -- nvmf/common.sh@542 -- # cat 00:11:07.955 [2024-12-06 04:14:20.327566] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.955 [2024-12-06 04:14:20.327616] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.955 04:14:20 -- nvmf/common.sh@544 -- # jq . 00:11:07.955 04:14:20 -- nvmf/common.sh@545 -- # IFS=, 00:11:07.955 04:14:20 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:11:07.955 "params": { 00:11:07.955 "name": "Nvme1", 00:11:07.955 "trtype": "tcp", 00:11:07.955 "traddr": "10.0.0.2", 00:11:07.955 "adrfam": "ipv4", 00:11:07.956 "trsvcid": "4420", 00:11:07.956 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:07.956 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:07.956 "hdgst": false, 00:11:07.956 "ddgst": false 00:11:07.956 }, 00:11:07.956 "method": "bdev_nvme_attach_controller" 00:11:07.956 }' 00:11:07.956 [2024-12-06 04:14:20.339540] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.956 [2024-12-06 04:14:20.339573] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.956 [2024-12-06 04:14:20.351513] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.956 [2024-12-06 04:14:20.351572] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.956 [2024-12-06 04:14:20.363525] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.956 [2024-12-06 04:14:20.363570] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.956 [2024-12-06 04:14:20.363647] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:11:07.956 [2024-12-06 04:14:20.363736] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75152 ] 00:11:07.956 [2024-12-06 04:14:20.375557] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.956 [2024-12-06 04:14:20.375594] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.956 [2024-12-06 04:14:20.387549] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.956 [2024-12-06 04:14:20.387605] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.956 [2024-12-06 04:14:20.399536] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.956 [2024-12-06 04:14:20.399580] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.956 [2024-12-06 04:14:20.411538] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.956 [2024-12-06 04:14:20.411580] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.956 [2024-12-06 04:14:20.423549] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.956 [2024-12-06 04:14:20.423591] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.956 [2024-12-06 04:14:20.435555] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.956 [2024-12-06 04:14:20.435584] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.956 [2024-12-06 04:14:20.447562] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.956 [2024-12-06 04:14:20.447608] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.956 [2024-12-06 04:14:20.459560] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.956 [2024-12-06 04:14:20.459604] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.956 [2024-12-06 04:14:20.471592] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.956 [2024-12-06 04:14:20.471633] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.956 [2024-12-06 04:14:20.483561] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.956 [2024-12-06 04:14:20.483602] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.956 [2024-12-06 04:14:20.495565] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.956 [2024-12-06 04:14:20.495607] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.956 [2024-12-06 04:14:20.499982] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:07.956 [2024-12-06 04:14:20.507566] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:07.956 [2024-12-06 04:14:20.507618] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.215 [2024-12-06 04:14:20.519608] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.215 [2024-12-06 04:14:20.519658] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
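From this point to the end of the excerpt the log is dominated by repeating pairs of subsystem.c:1793 "Requested NSID 1 already in use" and nvmf_rpc.c:1513 "Unable to add namespace" errors. NSID 1 was claimed by malloc0 when the subsystem was provisioned above, so further nvmf_subsystem_add_ns attempts against nqn.2016-06.io.spdk:cnode1 are rejected; the test keeps issuing these RPCs while the second (randrw) bdevperf job runs, and the rejections appear to be the behaviour being exercised rather than a malfunction. A hedged sketch of the kind of loop that produces them, assuming the repo's scripts/rpc.py and the default RPC socket (the exact loop in test/nvmf/target/zcopy.sh may differ):

for _ in $(seq 1 10); do
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns \
        nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 \
        || echo 'expected failure: NSID 1 already in use'
done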
00:11:08.215 [2024-12-06 04:14:20.531598] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.215 [2024-12-06 04:14:20.531648] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.215 [2024-12-06 04:14:20.543582] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.215 [2024-12-06 04:14:20.543626] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.215 [2024-12-06 04:14:20.555625] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.215 [2024-12-06 04:14:20.555675] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.215 [2024-12-06 04:14:20.567587] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.215 [2024-12-06 04:14:20.567632] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.215 [2024-12-06 04:14:20.579586] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.215 [2024-12-06 04:14:20.579629] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.215 [2024-12-06 04:14:20.591609] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.215 [2024-12-06 04:14:20.591642] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.215 [2024-12-06 04:14:20.603602] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.215 [2024-12-06 04:14:20.603629] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.215 [2024-12-06 04:14:20.615612] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.215 [2024-12-06 04:14:20.615638] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.215 [2024-12-06 04:14:20.626010] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:08.215 [2024-12-06 04:14:20.627614] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.215 [2024-12-06 04:14:20.627645] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.215 [2024-12-06 04:14:20.639618] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.215 [2024-12-06 04:14:20.639663] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.215 [2024-12-06 04:14:20.651625] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.215 [2024-12-06 04:14:20.651669] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.215 [2024-12-06 04:14:20.663625] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.215 [2024-12-06 04:14:20.663670] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.215 [2024-12-06 04:14:20.675627] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.215 [2024-12-06 04:14:20.675671] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.215 [2024-12-06 04:14:20.687641] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.215 [2024-12-06 04:14:20.687691] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.215 [2024-12-06 04:14:20.699661] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.215 [2024-12-06 04:14:20.699714] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.215 [2024-12-06 04:14:20.711640] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.215 [2024-12-06 04:14:20.711690] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.215 [2024-12-06 04:14:20.723677] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.215 [2024-12-06 04:14:20.723708] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.215 [2024-12-06 04:14:20.735641] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.215 [2024-12-06 04:14:20.735685] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.215 [2024-12-06 04:14:20.747646] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.215 [2024-12-06 04:14:20.747691] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.215 [2024-12-06 04:14:20.759695] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.215 [2024-12-06 04:14:20.759732] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.215 [2024-12-06 04:14:20.771686] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.215 [2024-12-06 04:14:20.771737] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.475 [2024-12-06 04:14:20.783701] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.475 [2024-12-06 04:14:20.783741] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.475 [2024-12-06 04:14:20.795698] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.475 [2024-12-06 04:14:20.795751] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.475 [2024-12-06 04:14:20.807709] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.475 [2024-12-06 04:14:20.807759] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.475 [2024-12-06 04:14:20.819723] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.475 [2024-12-06 04:14:20.819775] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.475 Running I/O for 5 seconds... 
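For reference, the listener and namespace provisioned earlier (10.0.0.2:4420, nqn.2016-06.io.spdk:cnode1, backed by the 32 MB / 4096-byte-block malloc0 bdev) could also be exercised with nvme-cli instead of bdevperf, reusing the host identity that nvmf/common.sh exported above. A hedged example, not executed in this run:

modprobe nvme-tcp
nvme connect -t tcp -a 10.0.0.2 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode1 \
    --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
nvme list                                      # the malloc0-backed namespace should show up here
nvme disconnect -n nqn.2016-06.io.spdk:cnode1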
00:11:08.475 [2024-12-06 04:14:20.831722] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.475 [2024-12-06 04:14:20.831769] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.475 [2024-12-06 04:14:20.849234] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.475 [2024-12-06 04:14:20.849286] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.475 [2024-12-06 04:14:20.864101] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.475 [2024-12-06 04:14:20.864153] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.475 [2024-12-06 04:14:20.879361] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.475 [2024-12-06 04:14:20.879440] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.475 [2024-12-06 04:14:20.888630] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.475 [2024-12-06 04:14:20.888679] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.475 [2024-12-06 04:14:20.903696] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.475 [2024-12-06 04:14:20.903746] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.475 [2024-12-06 04:14:20.921302] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.475 [2024-12-06 04:14:20.921367] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.475 [2024-12-06 04:14:20.936740] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.475 [2024-12-06 04:14:20.936794] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.475 [2024-12-06 04:14:20.953753] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.475 [2024-12-06 04:14:20.953835] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.475 [2024-12-06 04:14:20.969270] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.475 [2024-12-06 04:14:20.969323] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.475 [2024-12-06 04:14:20.978614] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.475 [2024-12-06 04:14:20.978666] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.475 [2024-12-06 04:14:20.994209] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.475 [2024-12-06 04:14:20.994246] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.475 [2024-12-06 04:14:21.009609] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.475 [2024-12-06 04:14:21.009659] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.475 [2024-12-06 04:14:21.019168] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.475 [2024-12-06 04:14:21.019220] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.475 [2024-12-06 04:14:21.035714] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.475 
[2024-12-06 04:14:21.035770] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.735 [2024-12-06 04:14:21.051247] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.735 [2024-12-06 04:14:21.051301] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.735 [2024-12-06 04:14:21.069000] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.735 [2024-12-06 04:14:21.069051] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.735 [2024-12-06 04:14:21.084833] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.735 [2024-12-06 04:14:21.084898] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.735 [2024-12-06 04:14:21.101987] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.735 [2024-12-06 04:14:21.102040] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.735 [2024-12-06 04:14:21.117434] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.735 [2024-12-06 04:14:21.117517] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.735 [2024-12-06 04:14:21.126698] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.735 [2024-12-06 04:14:21.126765] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.735 [2024-12-06 04:14:21.142978] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.735 [2024-12-06 04:14:21.143029] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.735 [2024-12-06 04:14:21.160663] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.735 [2024-12-06 04:14:21.160715] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.735 [2024-12-06 04:14:21.176140] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.735 [2024-12-06 04:14:21.176177] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.735 [2024-12-06 04:14:21.192468] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.735 [2024-12-06 04:14:21.192520] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.735 [2024-12-06 04:14:21.209167] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.735 [2024-12-06 04:14:21.209219] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.735 [2024-12-06 04:14:21.228203] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.735 [2024-12-06 04:14:21.228254] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.735 [2024-12-06 04:14:21.243204] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.735 [2024-12-06 04:14:21.243274] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.735 [2024-12-06 04:14:21.260623] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.735 [2024-12-06 04:14:21.260674] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.735 [2024-12-06 04:14:21.276799] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.735 [2024-12-06 04:14:21.276851] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.735 [2024-12-06 04:14:21.285719] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.735 [2024-12-06 04:14:21.285770] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.995 [2024-12-06 04:14:21.301873] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.995 [2024-12-06 04:14:21.301927] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.995 [2024-12-06 04:14:21.319484] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.995 [2024-12-06 04:14:21.319535] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.995 [2024-12-06 04:14:21.336152] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.995 [2024-12-06 04:14:21.336202] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.995 [2024-12-06 04:14:21.352264] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.995 [2024-12-06 04:14:21.352314] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.995 [2024-12-06 04:14:21.370028] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.995 [2024-12-06 04:14:21.370103] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.995 [2024-12-06 04:14:21.385153] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.995 [2024-12-06 04:14:21.385202] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.995 [2024-12-06 04:14:21.403082] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.995 [2024-12-06 04:14:21.403132] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.995 [2024-12-06 04:14:21.417199] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.995 [2024-12-06 04:14:21.417248] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.995 [2024-12-06 04:14:21.434204] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.995 [2024-12-06 04:14:21.434240] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.995 [2024-12-06 04:14:21.450628] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.995 [2024-12-06 04:14:21.450677] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.995 [2024-12-06 04:14:21.466776] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.995 [2024-12-06 04:14:21.466829] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.995 [2024-12-06 04:14:21.484453] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.995 [2024-12-06 04:14:21.484519] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.995 [2024-12-06 04:14:21.499229] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.995 [2024-12-06 04:14:21.499280] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.995 [2024-12-06 04:14:21.516812] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.995 [2024-12-06 04:14:21.516849] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.995 [2024-12-06 04:14:21.533812] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.995 [2024-12-06 04:14:21.533863] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:08.995 [2024-12-06 04:14:21.549435] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:08.995 [2024-12-06 04:14:21.549501] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.254 [2024-12-06 04:14:21.567370] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.254 [2024-12-06 04:14:21.567436] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.254 [2024-12-06 04:14:21.584083] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.254 [2024-12-06 04:14:21.584136] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.254 [2024-12-06 04:14:21.601145] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.254 [2024-12-06 04:14:21.601181] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.254 [2024-12-06 04:14:21.617265] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.254 [2024-12-06 04:14:21.617320] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.254 [2024-12-06 04:14:21.633822] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.254 [2024-12-06 04:14:21.633885] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.254 [2024-12-06 04:14:21.650248] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.254 [2024-12-06 04:14:21.650288] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.254 [2024-12-06 04:14:21.666988] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.254 [2024-12-06 04:14:21.667039] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.254 [2024-12-06 04:14:21.683054] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.254 [2024-12-06 04:14:21.683106] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.254 [2024-12-06 04:14:21.698731] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.254 [2024-12-06 04:14:21.698801] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.254 [2024-12-06 04:14:21.717383] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.254 [2024-12-06 04:14:21.717472] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.254 [2024-12-06 04:14:21.732029] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.254 [2024-12-06 04:14:21.732098] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.254 [2024-12-06 04:14:21.741509] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.254 [2024-12-06 04:14:21.741573] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.254 [2024-12-06 04:14:21.758011] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.254 [2024-12-06 04:14:21.758098] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.254 [2024-12-06 04:14:21.774551] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.254 [2024-12-06 04:14:21.774623] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.254 [2024-12-06 04:14:21.791839] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.254 [2024-12-06 04:14:21.791903] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.254 [2024-12-06 04:14:21.807719] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.254 [2024-12-06 04:14:21.807803] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.513 [2024-12-06 04:14:21.824634] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.513 [2024-12-06 04:14:21.824701] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.513 [2024-12-06 04:14:21.841151] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.513 [2024-12-06 04:14:21.841225] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.513 [2024-12-06 04:14:21.857749] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.513 [2024-12-06 04:14:21.857798] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.513 [2024-12-06 04:14:21.874887] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.513 [2024-12-06 04:14:21.874945] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.513 [2024-12-06 04:14:21.891110] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.513 [2024-12-06 04:14:21.891152] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.513 [2024-12-06 04:14:21.910282] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.513 [2024-12-06 04:14:21.910336] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.513 [2024-12-06 04:14:21.925630] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.513 [2024-12-06 04:14:21.925667] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.513 [2024-12-06 04:14:21.934738] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.513 [2024-12-06 04:14:21.934805] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.513 [2024-12-06 04:14:21.950534] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.514 [2024-12-06 04:14:21.950575] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.514 [2024-12-06 04:14:21.966210] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.514 [2024-12-06 04:14:21.966248] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.514 [2024-12-06 04:14:21.984683] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.514 [2024-12-06 04:14:21.984733] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.514 [2024-12-06 04:14:22.000101] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.514 [2024-12-06 04:14:22.000156] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.514 [2024-12-06 04:14:22.017068] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.514 [2024-12-06 04:14:22.017122] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.514 [2024-12-06 04:14:22.032683] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.514 [2024-12-06 04:14:22.032733] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.514 [2024-12-06 04:14:22.041808] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.514 [2024-12-06 04:14:22.041856] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.514 [2024-12-06 04:14:22.057324] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.514 [2024-12-06 04:14:22.057374] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.514 [2024-12-06 04:14:22.073712] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.514 [2024-12-06 04:14:22.073767] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.773 [2024-12-06 04:14:22.089873] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.773 [2024-12-06 04:14:22.089925] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.773 [2024-12-06 04:14:22.107621] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.773 [2024-12-06 04:14:22.107674] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.773 [2024-12-06 04:14:22.123233] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.773 [2024-12-06 04:14:22.123285] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.774 [2024-12-06 04:14:22.133146] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.774 [2024-12-06 04:14:22.133210] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.774 [2024-12-06 04:14:22.148279] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.774 [2024-12-06 04:14:22.148343] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.774 [2024-12-06 04:14:22.164806] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.774 [2024-12-06 04:14:22.164865] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.774 [2024-12-06 04:14:22.183340] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.774 [2024-12-06 04:14:22.183417] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.774 [2024-12-06 04:14:22.198999] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.774 [2024-12-06 04:14:22.199052] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.774 [2024-12-06 04:14:22.216426] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.774 [2024-12-06 04:14:22.216492] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.774 [2024-12-06 04:14:22.231231] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.774 [2024-12-06 04:14:22.231288] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.774 [2024-12-06 04:14:22.240972] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.774 [2024-12-06 04:14:22.241030] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.774 [2024-12-06 04:14:22.257779] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.774 [2024-12-06 04:14:22.257849] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.774 [2024-12-06 04:14:22.275200] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.774 [2024-12-06 04:14:22.275271] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.774 [2024-12-06 04:14:22.291232] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.774 [2024-12-06 04:14:22.291291] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.774 [2024-12-06 04:14:22.308937] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.774 [2024-12-06 04:14:22.308993] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:09.774 [2024-12-06 04:14:22.324101] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:09.774 [2024-12-06 04:14:22.324152] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.034 [2024-12-06 04:14:22.342898] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.034 [2024-12-06 04:14:22.342953] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.034 [2024-12-06 04:14:22.357160] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.034 [2024-12-06 04:14:22.357213] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.034 [2024-12-06 04:14:22.373723] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.034 [2024-12-06 04:14:22.373777] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.034 [2024-12-06 04:14:22.388083] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.034 [2024-12-06 04:14:22.388135] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.034 [2024-12-06 04:14:22.403358] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.034 [2024-12-06 04:14:22.403441] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.034 [2024-12-06 04:14:22.422216] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.034 [2024-12-06 04:14:22.422252] 
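If the volume of these expected rejections ever needs to be quantified (for example to confirm the RPC loop kept running for the whole 5-second randrw window), the captured console output can simply be grepped; "nvmf-build.log" below is a hypothetical name for a saved copy of this log:

grep -c 'Requested NSID 1 already in use' nvmf-build.log     # rejected add_ns attempts
grep -c 'Unable to add namespace' nvmf-build.log             # should match the count above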
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.034 [2024-12-06 04:14:22.436874] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.034 [2024-12-06 04:14:22.436927] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.034 [2024-12-06 04:14:22.446689] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.034 [2024-12-06 04:14:22.446740] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.034 [2024-12-06 04:14:22.461370] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.034 [2024-12-06 04:14:22.461453] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.034 [2024-12-06 04:14:22.471751] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.034 [2024-12-06 04:14:22.471793] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.034 [2024-12-06 04:14:22.486997] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.034 [2024-12-06 04:14:22.487037] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.034 [2024-12-06 04:14:22.503027] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.034 [2024-12-06 04:14:22.503065] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.034 [2024-12-06 04:14:22.521420] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.034 [2024-12-06 04:14:22.521481] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.034 [2024-12-06 04:14:22.535863] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.034 [2024-12-06 04:14:22.535914] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.034 [2024-12-06 04:14:22.551322] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.034 [2024-12-06 04:14:22.551371] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.034 [2024-12-06 04:14:22.570132] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.034 [2024-12-06 04:14:22.570184] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.034 [2024-12-06 04:14:22.585007] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.034 [2024-12-06 04:14:22.585056] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.294 [2024-12-06 04:14:22.602522] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.294 [2024-12-06 04:14:22.602575] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.294 [2024-12-06 04:14:22.619607] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.294 [2024-12-06 04:14:22.619647] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.294 [2024-12-06 04:14:22.635149] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.294 [2024-12-06 04:14:22.635198] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.294 [2024-12-06 04:14:22.652131] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.294 [2024-12-06 04:14:22.652181] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.294 [2024-12-06 04:14:22.669051] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.294 [2024-12-06 04:14:22.669102] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.294 [2024-12-06 04:14:22.685444] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.294 [2024-12-06 04:14:22.685492] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.294 [2024-12-06 04:14:22.702046] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.294 [2024-12-06 04:14:22.702124] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.294 [2024-12-06 04:14:22.719061] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.294 [2024-12-06 04:14:22.719113] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.294 [2024-12-06 04:14:22.735083] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.294 [2024-12-06 04:14:22.735152] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.294 [2024-12-06 04:14:22.752592] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.294 [2024-12-06 04:14:22.752644] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.294 [2024-12-06 04:14:22.769236] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.294 [2024-12-06 04:14:22.769286] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.294 [2024-12-06 04:14:22.784201] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.294 [2024-12-06 04:14:22.784255] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.294 [2024-12-06 04:14:22.801746] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.294 [2024-12-06 04:14:22.801827] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.294 [2024-12-06 04:14:22.817012] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.294 [2024-12-06 04:14:22.817062] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.294 [2024-12-06 04:14:22.826830] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.294 [2024-12-06 04:14:22.826892] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.294 [2024-12-06 04:14:22.842836] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.294 [2024-12-06 04:14:22.842897] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.553 [2024-12-06 04:14:22.859938] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.553 [2024-12-06 04:14:22.860070] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.553 [2024-12-06 04:14:22.875082] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.553 [2024-12-06 04:14:22.875144] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.553 [2024-12-06 04:14:22.884760] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.553 [2024-12-06 04:14:22.884804] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.553 [2024-12-06 04:14:22.901254] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.553 [2024-12-06 04:14:22.901366] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.553 [2024-12-06 04:14:22.917644] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.553 [2024-12-06 04:14:22.917711] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.553 [2024-12-06 04:14:22.934931] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.553 [2024-12-06 04:14:22.934997] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.553 [2024-12-06 04:14:22.950918] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.553 [2024-12-06 04:14:22.950981] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.553 [2024-12-06 04:14:22.968947] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.553 [2024-12-06 04:14:22.969001] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.553 [2024-12-06 04:14:22.983718] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.553 [2024-12-06 04:14:22.983785] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.553 [2024-12-06 04:14:22.994532] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.553 [2024-12-06 04:14:22.994594] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.553 [2024-12-06 04:14:23.009668] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.553 [2024-12-06 04:14:23.009727] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.553 [2024-12-06 04:14:23.026166] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.553 [2024-12-06 04:14:23.026209] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.553 [2024-12-06 04:14:23.042847] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.553 [2024-12-06 04:14:23.042907] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.553 [2024-12-06 04:14:23.059575] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.553 [2024-12-06 04:14:23.059637] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.553 [2024-12-06 04:14:23.075239] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.553 [2024-12-06 04:14:23.075302] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.553 [2024-12-06 04:14:23.093821] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.553 [2024-12-06 04:14:23.093885] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.553 [2024-12-06 04:14:23.109238] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.553 [2024-12-06 04:14:23.109302] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.812 [2024-12-06 04:14:23.124451] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.812 [2024-12-06 04:14:23.124530] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.812 [2024-12-06 04:14:23.140232] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.812 [2024-12-06 04:14:23.140296] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.812 [2024-12-06 04:14:23.156284] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.812 [2024-12-06 04:14:23.156344] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.812 [2024-12-06 04:14:23.174325] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.812 [2024-12-06 04:14:23.174374] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.812 [2024-12-06 04:14:23.189254] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.812 [2024-12-06 04:14:23.189310] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.812 [2024-12-06 04:14:23.206065] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.812 [2024-12-06 04:14:23.206141] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.812 [2024-12-06 04:14:23.222112] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.812 [2024-12-06 04:14:23.222176] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.812 [2024-12-06 04:14:23.240864] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.813 [2024-12-06 04:14:23.240928] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.813 [2024-12-06 04:14:23.255583] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.813 [2024-12-06 04:14:23.255641] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.813 [2024-12-06 04:14:23.266660] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.813 [2024-12-06 04:14:23.266711] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.813 [2024-12-06 04:14:23.282846] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.813 [2024-12-06 04:14:23.282910] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.813 [2024-12-06 04:14:23.299639] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.813 [2024-12-06 04:14:23.299711] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.813 [2024-12-06 04:14:23.316770] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.813 [2024-12-06 04:14:23.316853] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.813 [2024-12-06 04:14:23.332564] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.813 [2024-12-06 04:14:23.332627] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.813 [2024-12-06 04:14:23.349107] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.813 [2024-12-06 04:14:23.349207] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:10.813 [2024-12-06 04:14:23.365721] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:10.813 [2024-12-06 04:14:23.365808] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.072 [2024-12-06 04:14:23.381872] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.072 [2024-12-06 04:14:23.381929] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.072 [2024-12-06 04:14:23.400356] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.072 [2024-12-06 04:14:23.400442] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.072 [2024-12-06 04:14:23.415567] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.072 [2024-12-06 04:14:23.415632] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.072 [2024-12-06 04:14:23.434149] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.072 [2024-12-06 04:14:23.434212] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.072 [2024-12-06 04:14:23.448464] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.072 [2024-12-06 04:14:23.448530] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.072 [2024-12-06 04:14:23.464741] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.072 [2024-12-06 04:14:23.464827] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.072 [2024-12-06 04:14:23.481615] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.072 [2024-12-06 04:14:23.481666] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.072 [2024-12-06 04:14:23.497157] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.072 [2024-12-06 04:14:23.497211] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.072 [2024-12-06 04:14:23.507274] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.072 [2024-12-06 04:14:23.507353] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.072 [2024-12-06 04:14:23.522629] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.072 [2024-12-06 04:14:23.522682] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.072 [2024-12-06 04:14:23.538472] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.072 [2024-12-06 04:14:23.538522] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.072 [2024-12-06 04:14:23.547372] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.072 [2024-12-06 04:14:23.547449] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.072 [2024-12-06 04:14:23.563526] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.072 [2024-12-06 04:14:23.563582] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.072 [2024-12-06 04:14:23.580219] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.072 [2024-12-06 04:14:23.580271] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.072 [2024-12-06 04:14:23.589378] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.072 [2024-12-06 04:14:23.589453] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.072 [2024-12-06 04:14:23.604586] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.072 [2024-12-06 04:14:23.604637] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.072 [2024-12-06 04:14:23.619115] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.072 [2024-12-06 04:14:23.619165] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.072 [2024-12-06 04:14:23.631619] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.072 [2024-12-06 04:14:23.631669] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.331 [2024-12-06 04:14:23.648025] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.331 [2024-12-06 04:14:23.648091] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.331 [2024-12-06 04:14:23.665145] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.331 [2024-12-06 04:14:23.665177] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.331 [2024-12-06 04:14:23.680960] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.331 [2024-12-06 04:14:23.681010] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.331 [2024-12-06 04:14:23.699408] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.331 [2024-12-06 04:14:23.699471] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.331 [2024-12-06 04:14:23.714704] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.331 [2024-12-06 04:14:23.714800] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.331 [2024-12-06 04:14:23.731576] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.331 [2024-12-06 04:14:23.731622] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.331 [2024-12-06 04:14:23.746902] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.331 [2024-12-06 04:14:23.746947] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.331 [2024-12-06 04:14:23.763145] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.331 [2024-12-06 04:14:23.763227] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.331 [2024-12-06 04:14:23.779853] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.331 [2024-12-06 04:14:23.779897] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.331 [2024-12-06 04:14:23.797915] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.331 [2024-12-06 04:14:23.797965] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.331 [2024-12-06 04:14:23.813547] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.331 [2024-12-06 04:14:23.813612] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.331 [2024-12-06 04:14:23.828799] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.331 [2024-12-06 04:14:23.828853] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.331 [2024-12-06 04:14:23.844063] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.331 [2024-12-06 04:14:23.844115] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.331 [2024-12-06 04:14:23.853500] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.331 [2024-12-06 04:14:23.853539] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.331 [2024-12-06 04:14:23.866203] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.331 [2024-12-06 04:14:23.866244] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.331 [2024-12-06 04:14:23.877691] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.331 [2024-12-06 04:14:23.877743] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.331 [2024-12-06 04:14:23.893706] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.331 [2024-12-06 04:14:23.893761] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.589 [2024-12-06 04:14:23.910453] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.589 [2024-12-06 04:14:23.910508] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.589 [2024-12-06 04:14:23.929159] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.589 [2024-12-06 04:14:23.929225] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.590 [2024-12-06 04:14:23.944224] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.590 [2024-12-06 04:14:23.944274] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.590 [2024-12-06 04:14:23.953683] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.590 [2024-12-06 04:14:23.953733] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.590 [2024-12-06 04:14:23.969822] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.590 [2024-12-06 04:14:23.969861] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.590 [2024-12-06 04:14:23.986454] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.590 [2024-12-06 04:14:23.986525] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.590 [2024-12-06 04:14:24.003573] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.590 [2024-12-06 04:14:24.003612] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.590 [2024-12-06 04:14:24.020037] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.590 [2024-12-06 04:14:24.020074] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.590 [2024-12-06 04:14:24.036380] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.590 [2024-12-06 04:14:24.036457] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.590 [2024-12-06 04:14:24.053648] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.590 [2024-12-06 04:14:24.053697] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.590 [2024-12-06 04:14:24.069275] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.590 [2024-12-06 04:14:24.069326] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.590 [2024-12-06 04:14:24.087840] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.590 [2024-12-06 04:14:24.087876] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.590 [2024-12-06 04:14:24.102494] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.590 [2024-12-06 04:14:24.102544] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.590 [2024-12-06 04:14:24.120694] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.590 [2024-12-06 04:14:24.120749] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.590 [2024-12-06 04:14:24.136356] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.590 [2024-12-06 04:14:24.136437] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.849 [2024-12-06 04:14:24.154210] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.849 [2024-12-06 04:14:24.154249] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.849 [2024-12-06 04:14:24.169930] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.849 [2024-12-06 04:14:24.169969] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.849 [2024-12-06 04:14:24.187621] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.849 [2024-12-06 04:14:24.187673] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.849 [2024-12-06 04:14:24.202824] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.849 [2024-12-06 04:14:24.202861] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.849 [2024-12-06 04:14:24.219809] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.849 [2024-12-06 04:14:24.219847] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.849 [2024-12-06 04:14:24.238239] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.849 [2024-12-06 04:14:24.238278] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.849 [2024-12-06 04:14:24.256175] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.849 [2024-12-06 04:14:24.256213] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.849 [2024-12-06 04:14:24.270999] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.849 [2024-12-06 04:14:24.271037] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.849 [2024-12-06 04:14:24.280977] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.849 [2024-12-06 04:14:24.281015] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.849 [2024-12-06 04:14:24.296849] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.849 [2024-12-06 04:14:24.296887] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.849 [2024-12-06 04:14:24.313886] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.849 [2024-12-06 04:14:24.313923] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.849 [2024-12-06 04:14:24.330340] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.849 [2024-12-06 04:14:24.330416] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.849 [2024-12-06 04:14:24.346938] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.849 [2024-12-06 04:14:24.346974] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.849 [2024-12-06 04:14:24.363470] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.849 [2024-12-06 04:14:24.363529] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.849 [2024-12-06 04:14:24.380296] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.849 [2024-12-06 04:14:24.380348] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:11.849 [2024-12-06 04:14:24.397005] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:11.849 [2024-12-06 04:14:24.397043] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:12.108 [2024-12-06 04:14:24.413692] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:12.108 [2024-12-06 04:14:24.413730] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:12.108 [2024-12-06 04:14:24.430229] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:12.108 [2024-12-06 04:14:24.430268] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:12.108 [2024-12-06 04:14:24.445805] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:12.108 [2024-12-06 04:14:24.445844] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:12.108 [2024-12-06 04:14:24.455819] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:12.108 [2024-12-06 04:14:24.455857] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:12.108 [2024-12-06 04:14:24.467853] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:12.108 [2024-12-06 04:14:24.467905] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:12.108 [2024-12-06 04:14:24.483183] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:12.108 [2024-12-06 04:14:24.483234] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:12.108 [2024-12-06 04:14:24.494168] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:12.108 [2024-12-06 04:14:24.494209] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:12.108 [2024-12-06 04:14:24.508694] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:12.108 [2024-12-06 04:14:24.508745] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:12.108 [2024-12-06 04:14:24.525974] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:12.108 [2024-12-06 04:14:24.526025] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:12.108 [2024-12-06 04:14:24.541219] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:12.108 [2024-12-06 04:14:24.541300] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:12.108 [2024-12-06 04:14:24.551342] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:12.108 [2024-12-06 04:14:24.551439] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:12.108 [2024-12-06 04:14:24.567523] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:12.108 [2024-12-06 04:14:24.567573] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:12.108 [2024-12-06 04:14:24.577667] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:12.108 [2024-12-06 04:14:24.577718] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:12.108 [2024-12-06 04:14:24.592704] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:12.108 [2024-12-06 04:14:24.592755] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:12.108 [2024-12-06 04:14:24.607634] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:12.108 [2024-12-06 04:14:24.607685] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:12.108 [2024-12-06 04:14:24.625585] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:12.108 [2024-12-06 04:14:24.625635] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:12.108 [2024-12-06 04:14:24.640426] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:12.108 [2024-12-06 04:14:24.640486] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:12.108 [2024-12-06 04:14:24.650720] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:12.108 [2024-12-06 04:14:24.650782] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:12.108 [2024-12-06 04:14:24.666174] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:12.108 [2024-12-06 04:14:24.666212] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:12.368 [2024-12-06 04:14:24.682250] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:12.368 [2024-12-06 04:14:24.682290] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:12.368 [2024-12-06 04:14:24.700060] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:12.368 [2024-12-06 04:14:24.700097] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:12.368 [2024-12-06 04:14:24.715020] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:12.368 [2024-12-06 04:14:24.715058] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:12.368 [2024-12-06 04:14:24.724842] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:12.368 [2024-12-06 04:14:24.724878] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:12.368 [2024-12-06 04:14:24.740908] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:12.368 [2024-12-06 04:14:24.740959] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:12.368 [2024-12-06 04:14:24.757382] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:12.368 [2024-12-06 04:14:24.757458] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:12.368 [2024-12-06 04:14:24.774461] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:12.368 [2024-12-06 04:14:24.774511] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:12.368 [2024-12-06 04:14:24.791206] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:12.368 [2024-12-06 04:14:24.791256] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:12.368 [2024-12-06 04:14:24.809047] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:12.368 [2024-12-06 04:14:24.809116] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:12.368 [2024-12-06 04:14:24.824259] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:12.368 [2024-12-06 04:14:24.824311] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:12.368 [2024-12-06 04:14:24.834584] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:12.368 [2024-12-06 04:14:24.834622] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:12.368 [2024-12-06 04:14:24.850520] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:12.368 [2024-12-06 04:14:24.850556] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:12.368 [2024-12-06 04:14:24.865731] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:12.368 [2024-12-06 04:14:24.865770] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:12.368 [2024-12-06 04:14:24.882242] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:12.368 [2024-12-06 04:14:24.882294] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:12.368 [2024-12-06 04:14:24.897902] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:12.368 [2024-12-06 04:14:24.897953] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:12.368 [2024-12-06 04:14:24.918227] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:12.368 [2024-12-06 04:14:24.918293] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:12.629 [2024-12-06 04:14:24.934855] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:12.629 [2024-12-06 04:14:24.934915] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:12.629 [2024-12-06 04:14:24.949997] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:12.629 [2024-12-06 04:14:24.950053] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:12.629 [2024-12-06 04:14:24.967732] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:12.629 [2024-12-06 04:14:24.967825] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:12.629 [2024-12-06 04:14:24.983085] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:12.629 [2024-12-06 04:14:24.983152] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:12.629 [2024-12-06 04:14:25.001333] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:12.629 [2024-12-06 04:14:25.001461] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:12.629 [2024-12-06 04:14:25.016780] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:12.629 [2024-12-06 04:14:25.016833] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:12.629 [2024-12-06 04:14:25.026952] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:12.629 [2024-12-06 04:14:25.026996] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:12.629 [2024-12-06 04:14:25.042126] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:12.629 [2024-12-06 04:14:25.042166] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:12.629 [2024-12-06 04:14:25.052537] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:12.629 [2024-12-06 04:14:25.052589] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:12.629 [2024-12-06 04:14:25.069103] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:12.629 [2024-12-06 04:14:25.069141] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:12.629 [2024-12-06 04:14:25.085275] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:12.629 [2024-12-06 04:14:25.085332] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:12.629 [2024-12-06 04:14:25.103582] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:12.629 [2024-12-06 04:14:25.103633] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:12.629 [2024-12-06 04:14:25.119047] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:12.629 [2024-12-06 04:14:25.119090] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:12.629 [2024-12-06 04:14:25.137955] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:12.629 [2024-12-06 04:14:25.137994] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:12.629 [2024-12-06 04:14:25.152762] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:12.629 [2024-12-06 04:14:25.152804] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:12.629 [2024-12-06 04:14:25.169575] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:12.629 [2024-12-06 04:14:25.169624] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:12.629 [2024-12-06 04:14:25.185096] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:12.629 [2024-12-06 04:14:25.185134] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:12.891 [2024-12-06 04:14:25.204558] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:12.891 [2024-12-06 04:14:25.204610] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:12.891 [2024-12-06 04:14:25.219560] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:12.891 [2024-12-06 04:14:25.219609] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:12.891 [2024-12-06 04:14:25.229357] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:12.891 [2024-12-06 04:14:25.229400] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:12.891 [2024-12-06 04:14:25.245567] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:12.891 [2024-12-06 04:14:25.245617] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:12.891 [2024-12-06 04:14:25.262467] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:12.891 [2024-12-06 04:14:25.262531] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:12.891 [2024-12-06 04:14:25.277638] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:12.891 [2024-12-06 04:14:25.277685] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:12.891 [2024-12-06 04:14:25.287468] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:12.891 [2024-12-06 04:14:25.287511] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:12.891 [2024-12-06 04:14:25.303114] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:12.891 [2024-12-06 04:14:25.303177] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:12.891 [2024-12-06 04:14:25.319356] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:12.891 [2024-12-06 04:14:25.319439] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:12.891 [2024-12-06 04:14:25.338156] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:12.891 [2024-12-06 04:14:25.338203] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:12.891 [2024-12-06 04:14:25.352221] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:12.891 [2024-12-06 04:14:25.352293] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:12.891 [2024-12-06 04:14:25.368636] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:12.891 [2024-12-06 04:14:25.368689] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:12.891 [2024-12-06 04:14:25.386306] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:12.891 [2024-12-06 04:14:25.386351] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:12.891 [2024-12-06 04:14:25.401206] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:12.891 [2024-12-06 04:14:25.401272] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:12.891 [2024-12-06 04:14:25.411462] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:12.891 [2024-12-06 04:14:25.411525] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:12.891 [2024-12-06 04:14:25.427014] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:12.891 [2024-12-06 04:14:25.427073] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:12.891 [2024-12-06 04:14:25.442289] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:12.891 [2024-12-06 04:14:25.442335] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:13.149 [2024-12-06 04:14:25.458659] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:13.149 [2024-12-06 04:14:25.458747] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:13.149 [2024-12-06 04:14:25.474384] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:13.149 [2024-12-06 04:14:25.474496] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:13.149 [2024-12-06 04:14:25.491908] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:13.149 [2024-12-06 04:14:25.491957] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:13.149 [2024-12-06 04:14:25.508455] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:13.149 [2024-12-06 04:14:25.508511] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:13.149 [2024-12-06 04:14:25.524744] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:13.149 [2024-12-06 04:14:25.524808] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:13.149 [2024-12-06 04:14:25.543295] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:13.149 [2024-12-06 04:14:25.543330] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:13.149 [2024-12-06 04:14:25.557960] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:13.149 [2024-12-06 04:14:25.558008] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:13.149 [2024-12-06 04:14:25.567979] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:13.149 [2024-12-06 04:14:25.568027] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:13.149 [2024-12-06 04:14:25.583519] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:13.149 [2024-12-06 04:14:25.583567] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:13.149 [2024-12-06 04:14:25.599186] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:13.149 [2024-12-06 04:14:25.599234] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:13.149 [2024-12-06 04:14:25.618609] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:13.149 [2024-12-06 04:14:25.618658] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:13.149 [2024-12-06 04:14:25.633721] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:13.149 [2024-12-06 04:14:25.633770] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:13.149 [2024-12-06 04:14:25.652222] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:13.149 [2024-12-06 04:14:25.652272] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:13.149 [2024-12-06 04:14:25.666305] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:13.149 [2024-12-06 04:14:25.666356] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:13.149 [2024-12-06 04:14:25.682115] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:13.149 [2024-12-06 04:14:25.682165] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:13.149 [2024-12-06 04:14:25.700160] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:13.149 [2024-12-06 04:14:25.700196] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:13.406 [2024-12-06 04:14:25.711726] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:13.406 [2024-12-06 04:14:25.711763] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:13.406 [2024-12-06 04:14:25.729180] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:13.406 [2024-12-06 04:14:25.729231] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:13.406 [2024-12-06 04:14:25.745673] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:13.406 [2024-12-06 04:14:25.745721] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:13.406 [2024-12-06 04:14:25.763150] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:13.406 [2024-12-06 04:14:25.763213] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:13.406 [2024-12-06 04:14:25.777843] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:13.406 [2024-12-06 04:14:25.777919] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:13.406 [2024-12-06 04:14:25.793535] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:13.406 [2024-12-06 04:14:25.793585] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:13.406 [2024-12-06 04:14:25.811557] 
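For reference, this failure mode is straightforward to reproduce by hand against a running SPDK NVMe-oF target. The sketch below is illustrative only: it reuses the rpc_cmd helper (the test framework's wrapper around scripts/rpc.py) and the cnode1/malloc0 names that appear elsewhere in this log, and it assumes the subsystem and bdev already exist.

    # Attach a bdev as NSID 1, then try to attach another namespace with the same NSID.
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # first add succeeds; NSID 1 is now in use
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # second add fails: "Requested NSID 1 already in use"
    rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1           # detach NSID 1 so the next add can succeed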
00:11:13.406
00:11:13.406 Latency(us)
00:11:13.406 [2024-12-06T04:14:25.972Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:11:13.407 [2024-12-06T04:14:25.972Z] Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:11:13.407 Nvme1n1                      :       5.01   11594.79      90.58       0.00     0.00   11026.01    3842.79   23831.27
00:11:13.407 [2024-12-06T04:14:25.972Z] ===================================================================================================================
00:11:13.407 [2024-12-06T04:14:25.972Z] Total                        :              11594.79      90.58       0.00     0.00   11026.01    3842.79   23831.27
[... the same add-namespace error pair (spdk_nvmf_subsystem_add_ns_ext: "Requested NSID 1 already in use" / nvmf_rpc_ns_paused: "Unable to add namespace") resumes at 04:14:25.847 and repeats at roughly 12 ms intervals through 04:14:26.135; the duplicate entries are condensed here ...]
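A quick consistency check on the latency summary above: at the reported 8192-byte I/O size, 11594.79 IOPS works out to 11594.79 * 8192 / 1048576 ≈ 90.6 MiB/s, matching the MiB/s column; with the queue depth of 128, an average latency of 11026.01 us implies about 128 / 0.011026 ≈ 11,600 IOPS by Little's law, consistent with the measured rate; Fail/s and TO/s are both 0.00, i.e. no failed or timed-out I/O over the 5.01 s runtime.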
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:13.665 [2024-12-06 04:14:26.135414] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:13.665 [2024-12-06 04:14:26.135480] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:13.665 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (75152) - No such process 00:11:13.665 04:14:26 -- target/zcopy.sh@49 -- # wait 75152 00:11:13.665 04:14:26 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:13.665 04:14:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.665 04:14:26 -- common/autotest_common.sh@10 -- # set +x 00:11:13.665 04:14:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.665 04:14:26 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:13.665 04:14:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.665 04:14:26 -- common/autotest_common.sh@10 -- # set +x 00:11:13.665 delay0 00:11:13.665 04:14:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.665 04:14:26 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:11:13.665 04:14:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.665 04:14:26 -- common/autotest_common.sh@10 -- # set +x 00:11:13.665 04:14:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.665 04:14:26 -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:11:13.923 [2024-12-06 04:14:26.333224] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:11:20.578 Initializing NVMe Controllers 00:11:20.578 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:20.578 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:20.578 Initialization complete. Launching workers. 
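For reference, the namespace swap the zcopy test performs here — dropping the original namespace from cnode1, layering a high-latency delay bdev over malloc0, re-exposing it as NSID 1, and then firing the abort example at it — can be reproduced by hand roughly as follows. This is a sketch only: the rpc.py path and the default RPC socket are assumptions, while the method names and arguments are copied from the trace above.

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # Swap the backing namespace for a deliberately slow delay bdev.
  $RPC nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  $RPC bdev_delay_create -b malloc0 -d delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000    # avg/p99 read and write latency, usec
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1

  # Drive 5 seconds of randrw traffic plus abort requests at the slow namespace.
  /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 \
      -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'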
00:11:20.578 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 92 00:11:20.578 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 379, failed to submit 33 00:11:20.578 success 258, unsuccess 121, failed 0 00:11:20.578 04:14:32 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:11:20.578 04:14:32 -- target/zcopy.sh@60 -- # nvmftestfini 00:11:20.578 04:14:32 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:20.578 04:14:32 -- nvmf/common.sh@116 -- # sync 00:11:20.578 04:14:32 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:20.578 04:14:32 -- nvmf/common.sh@119 -- # set +e 00:11:20.578 04:14:32 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:20.578 04:14:32 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:20.578 rmmod nvme_tcp 00:11:20.578 rmmod nvme_fabrics 00:11:20.578 rmmod nvme_keyring 00:11:20.578 04:14:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:20.578 04:14:32 -- nvmf/common.sh@123 -- # set -e 00:11:20.578 04:14:32 -- nvmf/common.sh@124 -- # return 0 00:11:20.578 04:14:32 -- nvmf/common.sh@477 -- # '[' -n 75002 ']' 00:11:20.578 04:14:32 -- nvmf/common.sh@478 -- # killprocess 75002 00:11:20.578 04:14:32 -- common/autotest_common.sh@936 -- # '[' -z 75002 ']' 00:11:20.578 04:14:32 -- common/autotest_common.sh@940 -- # kill -0 75002 00:11:20.578 04:14:32 -- common/autotest_common.sh@941 -- # uname 00:11:20.578 04:14:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:20.578 04:14:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 75002 00:11:20.578 04:14:32 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:11:20.578 killing process with pid 75002 00:11:20.578 04:14:32 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:11:20.578 04:14:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 75002' 00:11:20.578 04:14:32 -- common/autotest_common.sh@955 -- # kill 75002 00:11:20.578 04:14:32 -- common/autotest_common.sh@960 -- # wait 75002 00:11:20.578 04:14:32 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:20.578 04:14:32 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:20.578 04:14:32 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:20.578 04:14:32 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:20.578 04:14:32 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:20.578 04:14:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:20.578 04:14:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:20.578 04:14:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:20.578 04:14:32 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:11:20.578 00:11:20.578 real 0m24.963s 00:11:20.578 user 0m39.984s 00:11:20.578 sys 0m7.571s 00:11:20.578 04:14:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:20.578 04:14:32 -- common/autotest_common.sh@10 -- # set +x 00:11:20.578 ************************************ 00:11:20.578 END TEST nvmf_zcopy 00:11:20.578 ************************************ 00:11:20.578 04:14:32 -- nvmf/nvmf.sh@53 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:11:20.578 04:14:32 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:20.578 04:14:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:20.578 04:14:32 -- common/autotest_common.sh@10 -- # set +x 00:11:20.578 ************************************ 00:11:20.578 START TEST nvmf_nmic 
00:11:20.578 ************************************ 00:11:20.578 04:14:32 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:11:20.578 * Looking for test storage... 00:11:20.578 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:20.578 04:14:32 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:20.578 04:14:32 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:20.578 04:14:32 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:20.578 04:14:33 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:20.578 04:14:33 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:20.578 04:14:33 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:20.578 04:14:33 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:20.578 04:14:33 -- scripts/common.sh@335 -- # IFS=.-: 00:11:20.578 04:14:33 -- scripts/common.sh@335 -- # read -ra ver1 00:11:20.578 04:14:33 -- scripts/common.sh@336 -- # IFS=.-: 00:11:20.578 04:14:33 -- scripts/common.sh@336 -- # read -ra ver2 00:11:20.578 04:14:33 -- scripts/common.sh@337 -- # local 'op=<' 00:11:20.578 04:14:33 -- scripts/common.sh@339 -- # ver1_l=2 00:11:20.578 04:14:33 -- scripts/common.sh@340 -- # ver2_l=1 00:11:20.578 04:14:33 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:20.578 04:14:33 -- scripts/common.sh@343 -- # case "$op" in 00:11:20.578 04:14:33 -- scripts/common.sh@344 -- # : 1 00:11:20.578 04:14:33 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:20.578 04:14:33 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:20.578 04:14:33 -- scripts/common.sh@364 -- # decimal 1 00:11:20.578 04:14:33 -- scripts/common.sh@352 -- # local d=1 00:11:20.578 04:14:33 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:20.578 04:14:33 -- scripts/common.sh@354 -- # echo 1 00:11:20.578 04:14:33 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:20.578 04:14:33 -- scripts/common.sh@365 -- # decimal 2 00:11:20.578 04:14:33 -- scripts/common.sh@352 -- # local d=2 00:11:20.578 04:14:33 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:20.578 04:14:33 -- scripts/common.sh@354 -- # echo 2 00:11:20.578 04:14:33 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:20.578 04:14:33 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:20.578 04:14:33 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:20.578 04:14:33 -- scripts/common.sh@367 -- # return 0 00:11:20.578 04:14:33 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:20.578 04:14:33 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:20.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.578 --rc genhtml_branch_coverage=1 00:11:20.578 --rc genhtml_function_coverage=1 00:11:20.578 --rc genhtml_legend=1 00:11:20.578 --rc geninfo_all_blocks=1 00:11:20.578 --rc geninfo_unexecuted_blocks=1 00:11:20.578 00:11:20.578 ' 00:11:20.578 04:14:33 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:20.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.578 --rc genhtml_branch_coverage=1 00:11:20.578 --rc genhtml_function_coverage=1 00:11:20.578 --rc genhtml_legend=1 00:11:20.578 --rc geninfo_all_blocks=1 00:11:20.578 --rc geninfo_unexecuted_blocks=1 00:11:20.578 00:11:20.578 ' 00:11:20.578 04:14:33 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:20.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.578 --rc 
genhtml_branch_coverage=1 00:11:20.578 --rc genhtml_function_coverage=1 00:11:20.578 --rc genhtml_legend=1 00:11:20.578 --rc geninfo_all_blocks=1 00:11:20.578 --rc geninfo_unexecuted_blocks=1 00:11:20.578 00:11:20.578 ' 00:11:20.578 04:14:33 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:20.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.578 --rc genhtml_branch_coverage=1 00:11:20.578 --rc genhtml_function_coverage=1 00:11:20.578 --rc genhtml_legend=1 00:11:20.578 --rc geninfo_all_blocks=1 00:11:20.578 --rc geninfo_unexecuted_blocks=1 00:11:20.578 00:11:20.578 ' 00:11:20.578 04:14:33 -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:20.578 04:14:33 -- nvmf/common.sh@7 -- # uname -s 00:11:20.578 04:14:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:20.578 04:14:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:20.578 04:14:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:20.578 04:14:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:20.578 04:14:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:20.578 04:14:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:20.578 04:14:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:20.578 04:14:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:20.578 04:14:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:20.578 04:14:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:20.578 04:14:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cb4d3929-adbe-4142-b5d1-990bbf2d4fca 00:11:20.578 04:14:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=cb4d3929-adbe-4142-b5d1-990bbf2d4fca 00:11:20.578 04:14:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:20.578 04:14:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:20.578 04:14:33 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:20.578 04:14:33 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:20.578 04:14:33 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:20.578 04:14:33 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:20.578 04:14:33 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:20.578 04:14:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.578 04:14:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.578 04:14:33 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.578 04:14:33 -- paths/export.sh@5 -- # export PATH 00:11:20.578 04:14:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.578 04:14:33 -- nvmf/common.sh@46 -- # : 0 00:11:20.578 04:14:33 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:20.578 04:14:33 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:20.578 04:14:33 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:20.578 04:14:33 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:20.578 04:14:33 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:20.578 04:14:33 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:11:20.578 04:14:33 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:20.578 04:14:33 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:20.578 04:14:33 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:20.578 04:14:33 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:20.578 04:14:33 -- target/nmic.sh@14 -- # nvmftestinit 00:11:20.578 04:14:33 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:20.578 04:14:33 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:20.578 04:14:33 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:20.578 04:14:33 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:20.578 04:14:33 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:20.578 04:14:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:20.578 04:14:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:20.578 04:14:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:20.578 04:14:33 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:11:20.578 04:14:33 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:11:20.578 04:14:33 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:11:20.578 04:14:33 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:11:20.578 04:14:33 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:11:20.578 04:14:33 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:11:20.578 04:14:33 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:20.578 04:14:33 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:20.578 04:14:33 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:20.578 04:14:33 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:11:20.578 04:14:33 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:20.578 04:14:33 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:20.578 04:14:33 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:20.579 04:14:33 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:20.579 04:14:33 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:20.579 04:14:33 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:20.579 04:14:33 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:20.579 04:14:33 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:20.579 04:14:33 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:11:20.579 04:14:33 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:11:20.579 Cannot find device "nvmf_tgt_br" 00:11:20.579 04:14:33 -- nvmf/common.sh@154 -- # true 00:11:20.579 04:14:33 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:11:20.579 Cannot find device "nvmf_tgt_br2" 00:11:20.579 04:14:33 -- nvmf/common.sh@155 -- # true 00:11:20.579 04:14:33 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:11:20.579 04:14:33 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:11:20.579 Cannot find device "nvmf_tgt_br" 00:11:20.579 04:14:33 -- nvmf/common.sh@157 -- # true 00:11:20.579 04:14:33 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:11:20.838 Cannot find device "nvmf_tgt_br2" 00:11:20.838 04:14:33 -- nvmf/common.sh@158 -- # true 00:11:20.838 04:14:33 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:11:20.838 04:14:33 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:11:20.838 04:14:33 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:20.838 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:20.838 04:14:33 -- nvmf/common.sh@161 -- # true 00:11:20.838 04:14:33 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:20.838 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:20.838 04:14:33 -- nvmf/common.sh@162 -- # true 00:11:20.838 04:14:33 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:11:20.838 04:14:33 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:20.838 04:14:33 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:20.838 04:14:33 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:20.838 04:14:33 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:20.838 04:14:33 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:20.838 04:14:33 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:20.838 04:14:33 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:20.838 04:14:33 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:20.838 04:14:33 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:11:20.838 04:14:33 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:11:20.838 04:14:33 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:11:20.838 04:14:33 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:11:20.838 04:14:33 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:20.838 04:14:33 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:20.838 04:14:33 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:11:20.838 04:14:33 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:11:20.838 04:14:33 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:11:20.838 04:14:33 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:11:20.838 04:14:33 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:21.097 04:14:33 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:21.097 04:14:33 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:21.097 04:14:33 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:21.097 04:14:33 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:11:21.097 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:21.097 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:11:21.097 00:11:21.097 --- 10.0.0.2 ping statistics --- 00:11:21.097 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:21.097 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:11:21.097 04:14:33 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:11:21.097 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:21.097 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:11:21.097 00:11:21.097 --- 10.0.0.3 ping statistics --- 00:11:21.097 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:21.097 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:11:21.097 04:14:33 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:21.097 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:21.097 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:11:21.097 00:11:21.097 --- 10.0.0.1 ping statistics --- 00:11:21.097 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:21.097 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:11:21.097 04:14:33 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:21.097 04:14:33 -- nvmf/common.sh@421 -- # return 0 00:11:21.097 04:14:33 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:21.097 04:14:33 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:21.097 04:14:33 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:21.097 04:14:33 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:21.097 04:14:33 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:21.097 04:14:33 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:21.097 04:14:33 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:21.097 04:14:33 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:11:21.097 04:14:33 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:21.097 04:14:33 -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:21.097 04:14:33 -- common/autotest_common.sh@10 -- # set +x 00:11:21.097 04:14:33 -- nvmf/common.sh@469 -- # nvmfpid=75484 00:11:21.097 04:14:33 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:21.097 04:14:33 -- nvmf/common.sh@470 -- # waitforlisten 75484 00:11:21.097 04:14:33 -- common/autotest_common.sh@829 -- # '[' -z 75484 ']' 00:11:21.097 04:14:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:21.097 04:14:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:21.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
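The nvmf_veth_init sequence above builds a small virtual topology: the initiator keeps 10.0.0.1 on nvmf_init_if, the nvmf_tgt_ns_spdk namespace owns 10.0.0.2 and 10.0.0.3 on the two target veths, and all three legs hang off the nvmf_br bridge with TCP port 4420 opened in iptables. A condensed sketch of that setup, assembled from the commands in the log (teardown, link-up steps and error handling trimmed):

  ip netns add nvmf_tgt_ns_spdk

  # veth pairs: one initiator leg, two target legs that move into the namespace.
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

  # Bridge the host-side peers together and allow NVMe/TCP traffic in.
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

  ping -c 1 10.0.0.2    # target address reachable from the initiator side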
00:11:21.097 04:14:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:21.097 04:14:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:21.097 04:14:33 -- common/autotest_common.sh@10 -- # set +x 00:11:21.097 [2024-12-06 04:14:33.530633] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:21.097 [2024-12-06 04:14:33.530735] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:21.357 [2024-12-06 04:14:33.672291] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:21.357 [2024-12-06 04:14:33.760108] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:21.357 [2024-12-06 04:14:33.760255] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:21.357 [2024-12-06 04:14:33.760268] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:21.357 [2024-12-06 04:14:33.760276] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:21.357 [2024-12-06 04:14:33.760437] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:21.357 [2024-12-06 04:14:33.760637] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:21.357 [2024-12-06 04:14:33.761332] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:21.357 [2024-12-06 04:14:33.761370] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:22.297 04:14:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:22.297 04:14:34 -- common/autotest_common.sh@862 -- # return 0 00:11:22.297 04:14:34 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:22.297 04:14:34 -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:22.297 04:14:34 -- common/autotest_common.sh@10 -- # set +x 00:11:22.297 04:14:34 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:22.297 04:14:34 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:22.297 04:14:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.297 04:14:34 -- common/autotest_common.sh@10 -- # set +x 00:11:22.297 [2024-12-06 04:14:34.587757] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:22.297 04:14:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.297 04:14:34 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:22.297 04:14:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.297 04:14:34 -- common/autotest_common.sh@10 -- # set +x 00:11:22.297 Malloc0 00:11:22.297 04:14:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.297 04:14:34 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:22.297 04:14:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.297 04:14:34 -- common/autotest_common.sh@10 -- # set +x 00:11:22.297 04:14:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.297 04:14:34 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:22.297 04:14:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.297 04:14:34 
-- common/autotest_common.sh@10 -- # set +x 00:11:22.297 04:14:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.297 04:14:34 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:22.297 04:14:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.297 04:14:34 -- common/autotest_common.sh@10 -- # set +x 00:11:22.297 [2024-12-06 04:14:34.672123] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:22.297 04:14:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.297 test case1: single bdev can't be used in multiple subsystems 00:11:22.297 04:14:34 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:11:22.297 04:14:34 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:11:22.297 04:14:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.297 04:14:34 -- common/autotest_common.sh@10 -- # set +x 00:11:22.297 04:14:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.297 04:14:34 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:22.297 04:14:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.297 04:14:34 -- common/autotest_common.sh@10 -- # set +x 00:11:22.297 04:14:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.297 04:14:34 -- target/nmic.sh@28 -- # nmic_status=0 00:11:22.297 04:14:34 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:11:22.297 04:14:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.297 04:14:34 -- common/autotest_common.sh@10 -- # set +x 00:11:22.297 [2024-12-06 04:14:34.695860] bdev.c:7940:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:11:22.297 [2024-12-06 04:14:34.695906] subsystem.c:1819:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:11:22.297 [2024-12-06 04:14:34.695917] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.297 request: 00:11:22.297 { 00:11:22.297 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:22.297 "namespace": { 00:11:22.297 "bdev_name": "Malloc0" 00:11:22.297 }, 00:11:22.297 "method": "nvmf_subsystem_add_ns", 00:11:22.297 "req_id": 1 00:11:22.297 } 00:11:22.297 Got JSON-RPC error response 00:11:22.297 response: 00:11:22.297 { 00:11:22.297 "code": -32602, 00:11:22.297 "message": "Invalid parameters" 00:11:22.297 } 00:11:22.297 04:14:34 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:11:22.297 04:14:34 -- target/nmic.sh@29 -- # nmic_status=1 00:11:22.297 04:14:34 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:11:22.297 Adding namespace failed - expected result. 00:11:22.297 04:14:34 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 
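Stripped of the xtrace noise, test case1 amounts to the short rpc.py sequence below: create a second subsystem, give it a listener, and confirm that attaching Malloc0 — already claimed exclusive_write by cnode1 — is rejected with the -32602 "Invalid parameters" error shown in the JSON-RPC response above. A sketch assuming the default RPC socket:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420

  # Malloc0 is already owned by cnode1, so this call is expected to fail.
  if ! $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0; then
      echo 'Adding namespace failed - expected result.'
  fi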
00:11:22.297 test case2: host connect to nvmf target in multiple paths 00:11:22.297 04:14:34 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:11:22.297 04:14:34 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:11:22.297 04:14:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.297 04:14:34 -- common/autotest_common.sh@10 -- # set +x 00:11:22.297 [2024-12-06 04:14:34.707933] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:11:22.297 04:14:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.297 04:14:34 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cb4d3929-adbe-4142-b5d1-990bbf2d4fca --hostid=cb4d3929-adbe-4142-b5d1-990bbf2d4fca -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:22.297 04:14:34 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cb4d3929-adbe-4142-b5d1-990bbf2d4fca --hostid=cb4d3929-adbe-4142-b5d1-990bbf2d4fca -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:11:22.557 04:14:34 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:11:22.557 04:14:34 -- common/autotest_common.sh@1187 -- # local i=0 00:11:22.557 04:14:34 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:11:22.557 04:14:34 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:11:22.557 04:14:34 -- common/autotest_common.sh@1194 -- # sleep 2 00:11:24.458 04:14:36 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:11:24.458 04:14:36 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:11:24.458 04:14:36 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:11:24.458 04:14:37 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:11:24.458 04:14:37 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:11:24.458 04:14:37 -- common/autotest_common.sh@1197 -- # return 0 00:11:24.458 04:14:37 -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:24.716 [global] 00:11:24.716 thread=1 00:11:24.716 invalidate=1 00:11:24.716 rw=write 00:11:24.716 time_based=1 00:11:24.716 runtime=1 00:11:24.716 ioengine=libaio 00:11:24.716 direct=1 00:11:24.716 bs=4096 00:11:24.716 iodepth=1 00:11:24.716 norandommap=0 00:11:24.716 numjobs=1 00:11:24.716 00:11:24.716 verify_dump=1 00:11:24.716 verify_backlog=512 00:11:24.716 verify_state_save=0 00:11:24.716 do_verify=1 00:11:24.716 verify=crc32c-intel 00:11:24.716 [job0] 00:11:24.716 filename=/dev/nvme0n1 00:11:24.716 Could not set queue depth (nvme0n1) 00:11:24.716 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:24.716 fio-3.35 00:11:24.716 Starting 1 thread 00:11:26.093 00:11:26.093 job0: (groupid=0, jobs=1): err= 0: pid=75581: Fri Dec 6 04:14:38 2024 00:11:26.093 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:11:26.093 slat (nsec): min=11064, max=83167, avg=15935.70, stdev=7148.49 00:11:26.093 clat (usec): min=152, max=391, avg=251.14, stdev=42.13 00:11:26.093 lat (usec): min=166, max=412, avg=267.07, stdev=42.52 00:11:26.093 clat percentiles (usec): 00:11:26.093 | 1.00th=[ 165], 5.00th=[ 182], 10.00th=[ 196], 20.00th=[ 215], 00:11:26.093 | 30.00th=[ 227], 40.00th=[ 239], 50.00th=[ 253], 60.00th=[ 265], 00:11:26.093 | 70.00th=[ 277], 80.00th=[ 289], 90.00th=[ 306], 
95.00th=[ 322], 00:11:26.093 | 99.00th=[ 351], 99.50th=[ 355], 99.90th=[ 383], 99.95th=[ 383], 00:11:26.093 | 99.99th=[ 392] 00:11:26.093 write: IOPS=2441, BW=9766KiB/s (10.0MB/s)(9776KiB/1001msec); 0 zone resets 00:11:26.093 slat (usec): min=14, max=107, avg=27.74, stdev=13.85 00:11:26.093 clat (usec): min=89, max=627, avg=154.28, stdev=35.56 00:11:26.093 lat (usec): min=108, max=649, avg=182.03, stdev=38.35 00:11:26.093 clat percentiles (usec): 00:11:26.093 | 1.00th=[ 101], 5.00th=[ 109], 10.00th=[ 115], 20.00th=[ 124], 00:11:26.093 | 30.00th=[ 133], 40.00th=[ 141], 50.00th=[ 149], 60.00th=[ 159], 00:11:26.093 | 70.00th=[ 169], 80.00th=[ 184], 90.00th=[ 202], 95.00th=[ 217], 00:11:26.093 | 99.00th=[ 243], 99.50th=[ 262], 99.90th=[ 322], 99.95th=[ 375], 00:11:26.093 | 99.99th=[ 627] 00:11:26.093 bw ( KiB/s): min= 9640, max= 9640, per=98.71%, avg=9640.00, stdev= 0.00, samples=1 00:11:26.093 iops : min= 2410, max= 2410, avg=2410.00, stdev= 0.00, samples=1 00:11:26.093 lat (usec) : 100=0.51%, 250=75.42%, 500=24.04%, 750=0.02% 00:11:26.093 cpu : usr=1.80%, sys=7.50%, ctx=4492, majf=0, minf=5 00:11:26.093 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:26.093 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:26.093 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:26.093 issued rwts: total=2048,2444,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:26.093 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:26.093 00:11:26.093 Run status group 0 (all jobs): 00:11:26.093 READ: bw=8184KiB/s (8380kB/s), 8184KiB/s-8184KiB/s (8380kB/s-8380kB/s), io=8192KiB (8389kB), run=1001-1001msec 00:11:26.093 WRITE: bw=9766KiB/s (10.0MB/s), 9766KiB/s-9766KiB/s (10.0MB/s-10.0MB/s), io=9776KiB (10.0MB), run=1001-1001msec 00:11:26.093 00:11:26.093 Disk stats (read/write): 00:11:26.093 nvme0n1: ios=1987/2048, merge=0/0, ticks=531/356, in_queue=887, util=91.37% 00:11:26.093 04:14:38 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:26.093 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:11:26.093 04:14:38 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:26.093 04:14:38 -- common/autotest_common.sh@1208 -- # local i=0 00:11:26.093 04:14:38 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:11:26.093 04:14:38 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:26.093 04:14:38 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:11:26.093 04:14:38 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:26.093 04:14:38 -- common/autotest_common.sh@1220 -- # return 0 00:11:26.093 04:14:38 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:11:26.093 04:14:38 -- target/nmic.sh@53 -- # nvmftestfini 00:11:26.093 04:14:38 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:26.093 04:14:38 -- nvmf/common.sh@116 -- # sync 00:11:26.093 04:14:38 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:26.093 04:14:38 -- nvmf/common.sh@119 -- # set +e 00:11:26.093 04:14:38 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:26.093 04:14:38 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:26.093 rmmod nvme_tcp 00:11:26.093 rmmod nvme_fabrics 00:11:26.093 rmmod nvme_keyring 00:11:26.093 04:14:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:26.093 04:14:38 -- nvmf/common.sh@123 -- # set -e 00:11:26.093 04:14:38 -- nvmf/common.sh@124 -- # return 0 00:11:26.093 04:14:38 -- nvmf/common.sh@477 
-- # '[' -n 75484 ']' 00:11:26.093 04:14:38 -- nvmf/common.sh@478 -- # killprocess 75484 00:11:26.093 04:14:38 -- common/autotest_common.sh@936 -- # '[' -z 75484 ']' 00:11:26.093 04:14:38 -- common/autotest_common.sh@940 -- # kill -0 75484 00:11:26.093 04:14:38 -- common/autotest_common.sh@941 -- # uname 00:11:26.093 04:14:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:26.093 04:14:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 75484 00:11:26.093 killing process with pid 75484 00:11:26.093 04:14:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:26.093 04:14:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:26.093 04:14:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 75484' 00:11:26.093 04:14:38 -- common/autotest_common.sh@955 -- # kill 75484 00:11:26.093 04:14:38 -- common/autotest_common.sh@960 -- # wait 75484 00:11:26.351 04:14:38 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:26.351 04:14:38 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:26.351 04:14:38 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:26.351 04:14:38 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:26.351 04:14:38 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:26.351 04:14:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:26.351 04:14:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:26.351 04:14:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:26.610 04:14:38 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:11:26.610 ************************************ 00:11:26.610 END TEST nvmf_nmic 00:11:26.610 ************************************ 00:11:26.610 00:11:26.610 real 0m6.082s 00:11:26.610 user 0m19.452s 00:11:26.610 sys 0m1.981s 00:11:26.610 04:14:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:26.610 04:14:38 -- common/autotest_common.sh@10 -- # set +x 00:11:26.610 04:14:38 -- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:26.610 04:14:38 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:26.610 04:14:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:26.610 04:14:38 -- common/autotest_common.sh@10 -- # set +x 00:11:26.610 ************************************ 00:11:26.610 START TEST nvmf_fio_target 00:11:26.610 ************************************ 00:11:26.610 04:14:38 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:26.610 * Looking for test storage... 
00:11:26.610 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:26.610 04:14:39 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:26.610 04:14:39 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:26.610 04:14:39 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:26.610 04:14:39 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:26.610 04:14:39 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:26.610 04:14:39 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:26.610 04:14:39 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:26.610 04:14:39 -- scripts/common.sh@335 -- # IFS=.-: 00:11:26.610 04:14:39 -- scripts/common.sh@335 -- # read -ra ver1 00:11:26.610 04:14:39 -- scripts/common.sh@336 -- # IFS=.-: 00:11:26.610 04:14:39 -- scripts/common.sh@336 -- # read -ra ver2 00:11:26.610 04:14:39 -- scripts/common.sh@337 -- # local 'op=<' 00:11:26.610 04:14:39 -- scripts/common.sh@339 -- # ver1_l=2 00:11:26.610 04:14:39 -- scripts/common.sh@340 -- # ver2_l=1 00:11:26.610 04:14:39 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:26.610 04:14:39 -- scripts/common.sh@343 -- # case "$op" in 00:11:26.610 04:14:39 -- scripts/common.sh@344 -- # : 1 00:11:26.610 04:14:39 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:26.610 04:14:39 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:26.869 04:14:39 -- scripts/common.sh@364 -- # decimal 1 00:11:26.869 04:14:39 -- scripts/common.sh@352 -- # local d=1 00:11:26.869 04:14:39 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:26.869 04:14:39 -- scripts/common.sh@354 -- # echo 1 00:11:26.869 04:14:39 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:26.869 04:14:39 -- scripts/common.sh@365 -- # decimal 2 00:11:26.869 04:14:39 -- scripts/common.sh@352 -- # local d=2 00:11:26.869 04:14:39 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:26.869 04:14:39 -- scripts/common.sh@354 -- # echo 2 00:11:26.869 04:14:39 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:26.869 04:14:39 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:26.869 04:14:39 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:26.869 04:14:39 -- scripts/common.sh@367 -- # return 0 00:11:26.869 04:14:39 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:26.869 04:14:39 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:26.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:26.869 --rc genhtml_branch_coverage=1 00:11:26.869 --rc genhtml_function_coverage=1 00:11:26.869 --rc genhtml_legend=1 00:11:26.869 --rc geninfo_all_blocks=1 00:11:26.869 --rc geninfo_unexecuted_blocks=1 00:11:26.869 00:11:26.869 ' 00:11:26.869 04:14:39 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:26.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:26.869 --rc genhtml_branch_coverage=1 00:11:26.869 --rc genhtml_function_coverage=1 00:11:26.869 --rc genhtml_legend=1 00:11:26.869 --rc geninfo_all_blocks=1 00:11:26.869 --rc geninfo_unexecuted_blocks=1 00:11:26.869 00:11:26.869 ' 00:11:26.869 04:14:39 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:26.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:26.869 --rc genhtml_branch_coverage=1 00:11:26.869 --rc genhtml_function_coverage=1 00:11:26.869 --rc genhtml_legend=1 00:11:26.869 --rc geninfo_all_blocks=1 00:11:26.869 --rc geninfo_unexecuted_blocks=1 00:11:26.869 00:11:26.869 ' 00:11:26.869 
04:14:39 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:26.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:26.869 --rc genhtml_branch_coverage=1 00:11:26.869 --rc genhtml_function_coverage=1 00:11:26.869 --rc genhtml_legend=1 00:11:26.869 --rc geninfo_all_blocks=1 00:11:26.869 --rc geninfo_unexecuted_blocks=1 00:11:26.869 00:11:26.869 ' 00:11:26.869 04:14:39 -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:26.869 04:14:39 -- nvmf/common.sh@7 -- # uname -s 00:11:26.869 04:14:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:26.869 04:14:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:26.869 04:14:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:26.869 04:14:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:26.869 04:14:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:26.869 04:14:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:26.869 04:14:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:26.869 04:14:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:26.869 04:14:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:26.869 04:14:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:26.869 04:14:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cb4d3929-adbe-4142-b5d1-990bbf2d4fca 00:11:26.869 04:14:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=cb4d3929-adbe-4142-b5d1-990bbf2d4fca 00:11:26.869 04:14:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:26.869 04:14:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:26.869 04:14:39 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:26.869 04:14:39 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:26.869 04:14:39 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:26.869 04:14:39 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:26.869 04:14:39 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:26.869 04:14:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.869 04:14:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.869 04:14:39 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.869 04:14:39 -- paths/export.sh@5 -- # export PATH 00:11:26.869 04:14:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.869 04:14:39 -- nvmf/common.sh@46 -- # : 0 00:11:26.869 04:14:39 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:26.869 04:14:39 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:26.869 04:14:39 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:26.869 04:14:39 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:26.869 04:14:39 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:26.869 04:14:39 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:11:26.869 04:14:39 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:26.869 04:14:39 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:26.869 04:14:39 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:26.869 04:14:39 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:26.869 04:14:39 -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:26.869 04:14:39 -- target/fio.sh@16 -- # nvmftestinit 00:11:26.869 04:14:39 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:26.869 04:14:39 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:26.869 04:14:39 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:26.869 04:14:39 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:26.869 04:14:39 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:26.869 04:14:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:26.869 04:14:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:26.869 04:14:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:26.869 04:14:39 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:11:26.869 04:14:39 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:11:26.869 04:14:39 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:11:26.869 04:14:39 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:11:26.869 04:14:39 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:11:26.869 04:14:39 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:11:26.869 04:14:39 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:26.869 04:14:39 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:26.869 04:14:39 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:26.869 04:14:39 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:11:26.869 04:14:39 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:26.869 04:14:39 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:26.869 04:14:39 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:26.869 04:14:39 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:26.869 04:14:39 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:26.869 04:14:39 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:26.869 04:14:39 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:26.869 04:14:39 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:26.869 04:14:39 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:11:26.869 04:14:39 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:11:26.869 Cannot find device "nvmf_tgt_br" 00:11:26.869 04:14:39 -- nvmf/common.sh@154 -- # true 00:11:26.869 04:14:39 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:11:26.869 Cannot find device "nvmf_tgt_br2" 00:11:26.869 04:14:39 -- nvmf/common.sh@155 -- # true 00:11:26.869 04:14:39 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:11:26.869 04:14:39 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:11:26.869 Cannot find device "nvmf_tgt_br" 00:11:26.869 04:14:39 -- nvmf/common.sh@157 -- # true 00:11:26.869 04:14:39 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:11:26.869 Cannot find device "nvmf_tgt_br2" 00:11:26.869 04:14:39 -- nvmf/common.sh@158 -- # true 00:11:26.869 04:14:39 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:11:26.869 04:14:39 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:11:26.869 04:14:39 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:26.869 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:26.869 04:14:39 -- nvmf/common.sh@161 -- # true 00:11:26.869 04:14:39 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:26.869 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:26.869 04:14:39 -- nvmf/common.sh@162 -- # true 00:11:26.869 04:14:39 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:11:26.869 04:14:39 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:26.869 04:14:39 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:26.869 04:14:39 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:26.870 04:14:39 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:26.870 04:14:39 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:27.127 04:14:39 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:27.127 04:14:39 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:27.127 04:14:39 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:27.127 04:14:39 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:11:27.127 04:14:39 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:11:27.127 04:14:39 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:11:27.127 04:14:39 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:11:27.127 04:14:39 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:27.127 04:14:39 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
00:11:27.127 04:14:39 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:27.127 04:14:39 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:11:27.127 04:14:39 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:11:27.127 04:14:39 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:11:27.127 04:14:39 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:27.127 04:14:39 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:27.128 04:14:39 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:27.128 04:14:39 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:27.128 04:14:39 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:11:27.128 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:27.128 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:11:27.128 00:11:27.128 --- 10.0.0.2 ping statistics --- 00:11:27.128 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:27.128 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:11:27.128 04:14:39 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:11:27.128 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:27.128 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:11:27.128 00:11:27.128 --- 10.0.0.3 ping statistics --- 00:11:27.128 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:27.128 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:11:27.128 04:14:39 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:27.128 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:27.128 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:11:27.128 00:11:27.128 --- 10.0.0.1 ping statistics --- 00:11:27.128 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:27.128 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:11:27.128 04:14:39 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:27.128 04:14:39 -- nvmf/common.sh@421 -- # return 0 00:11:27.128 04:14:39 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:27.128 04:14:39 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:27.128 04:14:39 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:27.128 04:14:39 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:27.128 04:14:39 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:27.128 04:14:39 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:27.128 04:14:39 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:27.128 04:14:39 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:11:27.128 04:14:39 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:27.128 04:14:39 -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:27.128 04:14:39 -- common/autotest_common.sh@10 -- # set +x 00:11:27.128 04:14:39 -- nvmf/common.sh@469 -- # nvmfpid=75766 00:11:27.128 04:14:39 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:27.128 04:14:39 -- nvmf/common.sh@470 -- # waitforlisten 75766 00:11:27.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
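As in the nmic test, the fio target is started inside the namespace and the harness blocks until the RPC socket answers before issuing any configuration. A simplified stand-in for that startup — the polling loop and the rpc_get_methods probe are an approximation of what waitforlisten does, not a copy of it:

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!

  # Wait for /var/tmp/spdk.sock to accept RPCs before configuring the target.
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done

  # First configuration step once the target is up (see the trace that follows):
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192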
00:11:27.128 04:14:39 -- common/autotest_common.sh@829 -- # '[' -z 75766 ']' 00:11:27.128 04:14:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:27.128 04:14:39 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:27.128 04:14:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:27.128 04:14:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:27.128 04:14:39 -- common/autotest_common.sh@10 -- # set +x 00:11:27.128 [2024-12-06 04:14:39.642544] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:27.128 [2024-12-06 04:14:39.642652] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:27.386 [2024-12-06 04:14:39.779208] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:27.386 [2024-12-06 04:14:39.872871] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:27.386 [2024-12-06 04:14:39.873014] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:27.386 [2024-12-06 04:14:39.873027] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:27.386 [2024-12-06 04:14:39.873035] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:27.386 [2024-12-06 04:14:39.873200] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:27.386 [2024-12-06 04:14:39.873347] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:27.386 [2024-12-06 04:14:39.874060] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:27.386 [2024-12-06 04:14:39.874133] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:28.322 04:14:40 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:28.322 04:14:40 -- common/autotest_common.sh@862 -- # return 0 00:11:28.322 04:14:40 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:28.322 04:14:40 -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:28.322 04:14:40 -- common/autotest_common.sh@10 -- # set +x 00:11:28.322 04:14:40 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:28.322 04:14:40 -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:28.581 [2024-12-06 04:14:40.909732] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:28.581 04:14:40 -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:28.840 04:14:41 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:11:28.840 04:14:41 -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:29.098 04:14:41 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:11:29.098 04:14:41 -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:29.356 04:14:41 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:11:29.356 04:14:41 -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:29.615 04:14:42 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:11:29.615 04:14:42 -- target/fio.sh@26 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:11:29.874 04:14:42 -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:30.134 04:14:42 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:11:30.134 04:14:42 -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:30.392 04:14:42 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:11:30.392 04:14:42 -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:30.651 04:14:43 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:11:30.651 04:14:43 -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:11:30.910 04:14:43 -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:31.169 04:14:43 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:31.169 04:14:43 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:31.427 04:14:43 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:31.427 04:14:43 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:31.686 04:14:44 -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:31.946 [2024-12-06 04:14:44.427080] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:31.946 04:14:44 -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:11:32.204 04:14:44 -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:11:32.463 04:14:44 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cb4d3929-adbe-4142-b5d1-990bbf2d4fca --hostid=cb4d3929-adbe-4142-b5d1-990bbf2d4fca -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:32.727 04:14:45 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:11:32.727 04:14:45 -- common/autotest_common.sh@1187 -- # local i=0 00:11:32.727 04:14:45 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:11:32.727 04:14:45 -- common/autotest_common.sh@1189 -- # [[ -n 4 ]] 00:11:32.727 04:14:45 -- common/autotest_common.sh@1190 -- # nvme_device_counter=4 00:11:32.727 04:14:45 -- common/autotest_common.sh@1194 -- # sleep 2 00:11:34.629 04:14:47 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:11:34.629 04:14:47 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:11:34.629 04:14:47 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:11:34.629 04:14:47 -- common/autotest_common.sh@1196 -- # nvme_devices=4 00:11:34.629 04:14:47 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:11:34.629 04:14:47 -- common/autotest_common.sh@1197 -- # return 0 00:11:34.629 04:14:47 -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:34.629 [global] 00:11:34.629 thread=1 00:11:34.629 invalidate=1 00:11:34.629 rw=write 00:11:34.629 time_based=1 
00:11:34.629 runtime=1 00:11:34.629 ioengine=libaio 00:11:34.629 direct=1 00:11:34.629 bs=4096 00:11:34.629 iodepth=1 00:11:34.629 norandommap=0 00:11:34.629 numjobs=1 00:11:34.629 00:11:34.629 verify_dump=1 00:11:34.629 verify_backlog=512 00:11:34.629 verify_state_save=0 00:11:34.629 do_verify=1 00:11:34.629 verify=crc32c-intel 00:11:34.629 [job0] 00:11:34.629 filename=/dev/nvme0n1 00:11:34.629 [job1] 00:11:34.629 filename=/dev/nvme0n2 00:11:34.629 [job2] 00:11:34.629 filename=/dev/nvme0n3 00:11:34.629 [job3] 00:11:34.629 filename=/dev/nvme0n4 00:11:34.629 Could not set queue depth (nvme0n1) 00:11:34.629 Could not set queue depth (nvme0n2) 00:11:34.629 Could not set queue depth (nvme0n3) 00:11:34.629 Could not set queue depth (nvme0n4) 00:11:34.888 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:34.888 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:34.888 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:34.888 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:34.888 fio-3.35 00:11:34.889 Starting 4 threads 00:11:36.268 00:11:36.268 job0: (groupid=0, jobs=1): err= 0: pid=75956: Fri Dec 6 04:14:48 2024 00:11:36.268 read: IOPS=2232, BW=8931KiB/s (9145kB/s)(8940KiB/1001msec) 00:11:36.268 slat (nsec): min=11802, max=73276, avg=15992.02, stdev=5998.97 00:11:36.268 clat (usec): min=156, max=536, avg=219.64, stdev=31.74 00:11:36.268 lat (usec): min=169, max=548, avg=235.63, stdev=32.53 00:11:36.268 clat percentiles (usec): 00:11:36.268 | 1.00th=[ 165], 5.00th=[ 176], 10.00th=[ 182], 20.00th=[ 190], 00:11:36.268 | 30.00th=[ 200], 40.00th=[ 208], 50.00th=[ 217], 60.00th=[ 227], 00:11:36.268 | 70.00th=[ 237], 80.00th=[ 247], 90.00th=[ 262], 95.00th=[ 269], 00:11:36.268 | 99.00th=[ 302], 99.50th=[ 310], 99.90th=[ 351], 99.95th=[ 429], 00:11:36.268 | 99.99th=[ 537] 00:11:36.268 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:11:36.268 slat (usec): min=14, max=108, avg=24.73, stdev= 8.74 00:11:36.268 clat (usec): min=101, max=496, avg=157.19, stdev=29.93 00:11:36.268 lat (usec): min=121, max=515, avg=181.92, stdev=31.47 00:11:36.268 clat percentiles (usec): 00:11:36.268 | 1.00th=[ 110], 5.00th=[ 118], 10.00th=[ 124], 20.00th=[ 131], 00:11:36.268 | 30.00th=[ 139], 40.00th=[ 145], 50.00th=[ 153], 60.00th=[ 161], 00:11:36.268 | 70.00th=[ 172], 80.00th=[ 182], 90.00th=[ 198], 95.00th=[ 212], 00:11:36.268 | 99.00th=[ 239], 99.50th=[ 249], 99.90th=[ 273], 99.95th=[ 297], 00:11:36.268 | 99.99th=[ 498] 00:11:36.268 bw ( KiB/s): min=10800, max=10800, per=30.65%, avg=10800.00, stdev= 0.00, samples=1 00:11:36.268 iops : min= 2700, max= 2700, avg=2700.00, stdev= 0.00, samples=1 00:11:36.268 lat (usec) : 250=91.41%, 500=8.57%, 750=0.02% 00:11:36.268 cpu : usr=1.90%, sys=7.40%, ctx=4796, majf=0, minf=17 00:11:36.268 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:36.268 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:36.268 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:36.268 issued rwts: total=2235,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:36.268 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:36.268 job1: (groupid=0, jobs=1): err= 0: pid=75957: Fri Dec 6 04:14:48 2024 00:11:36.268 read: IOPS=2557, BW=9.99MiB/s 
(10.5MB/s)(10.0MiB/1001msec) 00:11:36.268 slat (nsec): min=10671, max=63164, avg=14540.63, stdev=5899.92 00:11:36.268 clat (usec): min=129, max=291, avg=195.30, stdev=28.19 00:11:36.268 lat (usec): min=141, max=318, avg=209.84, stdev=28.96 00:11:36.268 clat percentiles (usec): 00:11:36.268 | 1.00th=[ 145], 5.00th=[ 155], 10.00th=[ 161], 20.00th=[ 169], 00:11:36.268 | 30.00th=[ 178], 40.00th=[ 184], 50.00th=[ 194], 60.00th=[ 202], 00:11:36.268 | 70.00th=[ 212], 80.00th=[ 223], 90.00th=[ 233], 95.00th=[ 243], 00:11:36.268 | 99.00th=[ 265], 99.50th=[ 269], 99.90th=[ 289], 99.95th=[ 289], 00:11:36.268 | 99.99th=[ 293] 00:11:36.268 write: IOPS=2717, BW=10.6MiB/s (11.1MB/s)(10.6MiB/1001msec); 0 zone resets 00:11:36.268 slat (nsec): min=15928, max=90586, avg=23462.05, stdev=8571.26 00:11:36.268 clat (usec): min=89, max=268, avg=143.70, stdev=27.87 00:11:36.268 lat (usec): min=106, max=334, avg=167.17, stdev=29.61 00:11:36.268 clat percentiles (usec): 00:11:36.268 | 1.00th=[ 98], 5.00th=[ 106], 10.00th=[ 113], 20.00th=[ 120], 00:11:36.268 | 30.00th=[ 127], 40.00th=[ 133], 50.00th=[ 139], 60.00th=[ 147], 00:11:36.268 | 70.00th=[ 155], 80.00th=[ 167], 90.00th=[ 184], 95.00th=[ 196], 00:11:36.268 | 99.00th=[ 219], 99.50th=[ 227], 99.90th=[ 251], 99.95th=[ 258], 00:11:36.268 | 99.99th=[ 269] 00:11:36.268 bw ( KiB/s): min=12288, max=12288, per=34.87%, avg=12288.00, stdev= 0.00, samples=1 00:11:36.268 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:11:36.268 lat (usec) : 100=0.91%, 250=97.44%, 500=1.65% 00:11:36.268 cpu : usr=1.90%, sys=7.80%, ctx=5280, majf=0, minf=1 00:11:36.268 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:36.268 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:36.268 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:36.268 issued rwts: total=2560,2720,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:36.268 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:36.268 job2: (groupid=0, jobs=1): err= 0: pid=75958: Fri Dec 6 04:14:48 2024 00:11:36.268 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:11:36.268 slat (nsec): min=8169, max=85209, avg=15981.95, stdev=7292.92 00:11:36.268 clat (usec): min=227, max=469, avg=311.12, stdev=33.69 00:11:36.268 lat (usec): min=239, max=478, avg=327.10, stdev=34.26 00:11:36.268 clat percentiles (usec): 00:11:36.268 | 1.00th=[ 241], 5.00th=[ 258], 10.00th=[ 265], 20.00th=[ 281], 00:11:36.268 | 30.00th=[ 289], 40.00th=[ 306], 50.00th=[ 314], 60.00th=[ 322], 00:11:36.268 | 70.00th=[ 330], 80.00th=[ 338], 90.00th=[ 355], 95.00th=[ 363], 00:11:36.268 | 99.00th=[ 383], 99.50th=[ 392], 99.90th=[ 445], 99.95th=[ 469], 00:11:36.268 | 99.99th=[ 469] 00:11:36.268 write: IOPS=1768, BW=7073KiB/s (7243kB/s)(7080KiB/1001msec); 0 zone resets 00:11:36.268 slat (usec): min=10, max=208, avg=23.52, stdev= 9.52 00:11:36.268 clat (usec): min=138, max=1152, avg=254.69, stdev=41.11 00:11:36.268 lat (usec): min=189, max=1179, avg=278.20, stdev=41.60 00:11:36.268 clat percentiles (usec): 00:11:36.268 | 1.00th=[ 186], 5.00th=[ 204], 10.00th=[ 212], 20.00th=[ 225], 00:11:36.268 | 30.00th=[ 235], 40.00th=[ 243], 50.00th=[ 253], 60.00th=[ 262], 00:11:36.268 | 70.00th=[ 273], 80.00th=[ 281], 90.00th=[ 297], 95.00th=[ 310], 00:11:36.268 | 99.00th=[ 351], 99.50th=[ 396], 99.90th=[ 553], 99.95th=[ 1156], 00:11:36.268 | 99.99th=[ 1156] 00:11:36.268 bw ( KiB/s): min= 8192, max= 8192, per=23.25%, avg=8192.00, stdev= 0.00, samples=1 00:11:36.268 iops : min= 
2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:36.268 lat (usec) : 250=26.62%, 500=73.32%, 750=0.03% 00:11:36.268 lat (msec) : 2=0.03% 00:11:36.268 cpu : usr=2.10%, sys=4.70%, ctx=3307, majf=0, minf=7 00:11:36.268 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:36.268 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:36.268 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:36.268 issued rwts: total=1536,1770,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:36.268 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:36.268 job3: (groupid=0, jobs=1): err= 0: pid=75959: Fri Dec 6 04:14:48 2024 00:11:36.268 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:11:36.268 slat (nsec): min=8119, max=64161, avg=13720.70, stdev=6376.56 00:11:36.268 clat (usec): min=222, max=474, avg=313.50, stdev=34.29 00:11:36.268 lat (usec): min=236, max=500, avg=327.22, stdev=34.37 00:11:36.268 clat percentiles (usec): 00:11:36.268 | 1.00th=[ 245], 5.00th=[ 258], 10.00th=[ 269], 20.00th=[ 281], 00:11:36.268 | 30.00th=[ 297], 40.00th=[ 306], 50.00th=[ 314], 60.00th=[ 322], 00:11:36.268 | 70.00th=[ 334], 80.00th=[ 343], 90.00th=[ 355], 95.00th=[ 367], 00:11:36.268 | 99.00th=[ 392], 99.50th=[ 404], 99.90th=[ 449], 99.95th=[ 474], 00:11:36.268 | 99.99th=[ 474] 00:11:36.269 write: IOPS=1766, BW=7065KiB/s (7234kB/s)(7072KiB/1001msec); 0 zone resets 00:11:36.269 slat (usec): min=10, max=156, avg=22.97, stdev=10.31 00:11:36.269 clat (usec): min=163, max=1074, avg=255.69, stdev=40.61 00:11:36.269 lat (usec): min=182, max=1090, avg=278.65, stdev=41.71 00:11:36.269 clat percentiles (usec): 00:11:36.269 | 1.00th=[ 192], 5.00th=[ 204], 10.00th=[ 212], 20.00th=[ 225], 00:11:36.269 | 30.00th=[ 235], 40.00th=[ 245], 50.00th=[ 253], 60.00th=[ 265], 00:11:36.269 | 70.00th=[ 273], 80.00th=[ 285], 90.00th=[ 297], 95.00th=[ 310], 00:11:36.269 | 99.00th=[ 359], 99.50th=[ 383], 99.90th=[ 644], 99.95th=[ 1074], 00:11:36.269 | 99.99th=[ 1074] 00:11:36.269 bw ( KiB/s): min= 8192, max= 8192, per=23.25%, avg=8192.00, stdev= 0.00, samples=1 00:11:36.269 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:36.269 lat (usec) : 250=25.82%, 500=74.12%, 750=0.03% 00:11:36.269 lat (msec) : 2=0.03% 00:11:36.269 cpu : usr=1.20%, sys=5.20%, ctx=3304, majf=0, minf=11 00:11:36.269 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:36.269 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:36.269 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:36.269 issued rwts: total=1536,1768,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:36.269 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:36.269 00:11:36.269 Run status group 0 (all jobs): 00:11:36.269 READ: bw=30.7MiB/s (32.2MB/s), 6138KiB/s-9.99MiB/s (6285kB/s-10.5MB/s), io=30.7MiB (32.2MB), run=1001-1001msec 00:11:36.269 WRITE: bw=34.4MiB/s (36.1MB/s), 7065KiB/s-10.6MiB/s (7234kB/s-11.1MB/s), io=34.4MiB (36.1MB), run=1001-1001msec 00:11:36.269 00:11:36.269 Disk stats (read/write): 00:11:36.269 nvme0n1: ios=2063/2048, merge=0/0, ticks=468/348, in_queue=816, util=87.47% 00:11:36.269 nvme0n2: ios=2088/2511, merge=0/0, ticks=431/389, in_queue=820, util=88.34% 00:11:36.269 nvme0n3: ios=1298/1536, merge=0/0, ticks=407/376, in_queue=783, util=89.00% 00:11:36.269 nvme0n4: ios=1297/1536, merge=0/0, ticks=379/378, in_queue=757, util=89.66% 00:11:36.269 04:14:48 -- target/fio.sh@51 -- # 
/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:11:36.269 [global] 00:11:36.269 thread=1 00:11:36.269 invalidate=1 00:11:36.269 rw=randwrite 00:11:36.269 time_based=1 00:11:36.269 runtime=1 00:11:36.269 ioengine=libaio 00:11:36.269 direct=1 00:11:36.269 bs=4096 00:11:36.269 iodepth=1 00:11:36.269 norandommap=0 00:11:36.269 numjobs=1 00:11:36.269 00:11:36.269 verify_dump=1 00:11:36.269 verify_backlog=512 00:11:36.269 verify_state_save=0 00:11:36.269 do_verify=1 00:11:36.269 verify=crc32c-intel 00:11:36.269 [job0] 00:11:36.269 filename=/dev/nvme0n1 00:11:36.269 [job1] 00:11:36.269 filename=/dev/nvme0n2 00:11:36.269 [job2] 00:11:36.269 filename=/dev/nvme0n3 00:11:36.269 [job3] 00:11:36.269 filename=/dev/nvme0n4 00:11:36.269 Could not set queue depth (nvme0n1) 00:11:36.269 Could not set queue depth (nvme0n2) 00:11:36.269 Could not set queue depth (nvme0n3) 00:11:36.269 Could not set queue depth (nvme0n4) 00:11:36.269 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:36.269 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:36.269 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:36.269 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:36.269 fio-3.35 00:11:36.269 Starting 4 threads 00:11:37.648 00:11:37.648 job0: (groupid=0, jobs=1): err= 0: pid=76014: Fri Dec 6 04:14:49 2024 00:11:37.648 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:11:37.648 slat (nsec): min=11453, max=65160, avg=21938.74, stdev=7498.56 00:11:37.648 clat (usec): min=247, max=8008, avg=420.95, stdev=250.82 00:11:37.648 lat (usec): min=273, max=8040, avg=442.89, stdev=250.86 00:11:37.648 clat percentiles (usec): 00:11:37.648 | 1.00th=[ 269], 5.00th=[ 310], 10.00th=[ 326], 20.00th=[ 351], 00:11:37.648 | 30.00th=[ 371], 40.00th=[ 392], 50.00th=[ 404], 60.00th=[ 424], 00:11:37.648 | 70.00th=[ 445], 80.00th=[ 474], 90.00th=[ 506], 95.00th=[ 537], 00:11:37.649 | 99.00th=[ 619], 99.50th=[ 660], 99.90th=[ 1631], 99.95th=[ 8029], 00:11:37.649 | 99.99th=[ 8029] 00:11:37.649 write: IOPS=1392, BW=5570KiB/s (5704kB/s)(5576KiB/1001msec); 0 zone resets 00:11:37.649 slat (usec): min=16, max=130, avg=32.05, stdev=12.07 00:11:37.649 clat (usec): min=132, max=1511, avg=355.11, stdev=88.94 00:11:37.649 lat (usec): min=164, max=1547, avg=387.15, stdev=85.90 00:11:37.649 clat percentiles (usec): 00:11:37.649 | 1.00th=[ 182], 5.00th=[ 227], 10.00th=[ 247], 20.00th=[ 281], 00:11:37.649 | 30.00th=[ 306], 40.00th=[ 326], 50.00th=[ 347], 60.00th=[ 375], 00:11:37.649 | 70.00th=[ 404], 80.00th=[ 437], 90.00th=[ 469], 95.00th=[ 490], 00:11:37.649 | 99.00th=[ 537], 99.50th=[ 570], 99.90th=[ 594], 99.95th=[ 1516], 00:11:37.649 | 99.99th=[ 1516] 00:11:37.649 bw ( KiB/s): min= 5968, max= 5968, per=19.81%, avg=5968.00, stdev= 0.00, samples=1 00:11:37.649 iops : min= 1492, max= 1492, avg=1492.00, stdev= 0.00, samples=1 00:11:37.649 lat (usec) : 250=6.12%, 500=86.56%, 750=7.11%, 1000=0.08% 00:11:37.649 lat (msec) : 2=0.08%, 10=0.04% 00:11:37.649 cpu : usr=1.30%, sys=6.00%, ctx=2419, majf=0, minf=11 00:11:37.649 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:37.649 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:37.649 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:11:37.649 issued rwts: total=1024,1394,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:37.649 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:37.649 job1: (groupid=0, jobs=1): err= 0: pid=76015: Fri Dec 6 04:14:49 2024 00:11:37.649 read: IOPS=2205, BW=8823KiB/s (9035kB/s)(8832KiB/1001msec) 00:11:37.649 slat (nsec): min=11941, max=80231, avg=15890.06, stdev=6591.45 00:11:37.649 clat (usec): min=126, max=4331, avg=215.31, stdev=125.07 00:11:37.649 lat (usec): min=139, max=4352, avg=231.20, stdev=125.49 00:11:37.649 clat percentiles (usec): 00:11:37.649 | 1.00th=[ 141], 5.00th=[ 155], 10.00th=[ 167], 20.00th=[ 178], 00:11:37.649 | 30.00th=[ 188], 40.00th=[ 198], 50.00th=[ 206], 60.00th=[ 217], 00:11:37.649 | 70.00th=[ 229], 80.00th=[ 241], 90.00th=[ 258], 95.00th=[ 273], 00:11:37.649 | 99.00th=[ 322], 99.50th=[ 392], 99.90th=[ 1827], 99.95th=[ 3064], 00:11:37.649 | 99.99th=[ 4359] 00:11:37.649 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:11:37.649 slat (usec): min=17, max=101, avg=23.89, stdev= 9.18 00:11:37.649 clat (usec): min=90, max=389, avg=164.34, stdev=34.88 00:11:37.649 lat (usec): min=110, max=471, avg=188.23, stdev=35.85 00:11:37.649 clat percentiles (usec): 00:11:37.649 | 1.00th=[ 101], 5.00th=[ 115], 10.00th=[ 122], 20.00th=[ 135], 00:11:37.649 | 30.00th=[ 145], 40.00th=[ 153], 50.00th=[ 161], 60.00th=[ 169], 00:11:37.649 | 70.00th=[ 180], 80.00th=[ 194], 90.00th=[ 212], 95.00th=[ 227], 00:11:37.649 | 99.00th=[ 258], 99.50th=[ 269], 99.90th=[ 297], 99.95th=[ 330], 00:11:37.649 | 99.99th=[ 392] 00:11:37.649 bw ( KiB/s): min=11088, max=11088, per=36.81%, avg=11088.00, stdev= 0.00, samples=1 00:11:37.649 iops : min= 2772, max= 2772, avg=2772.00, stdev= 0.00, samples=1 00:11:37.649 lat (usec) : 100=0.46%, 250=92.26%, 500=7.09%, 750=0.04%, 1000=0.04% 00:11:37.649 lat (msec) : 2=0.06%, 4=0.02%, 10=0.02% 00:11:37.649 cpu : usr=2.00%, sys=7.10%, ctx=4768, majf=0, minf=9 00:11:37.649 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:37.649 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:37.649 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:37.649 issued rwts: total=2208,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:37.649 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:37.649 job2: (groupid=0, jobs=1): err= 0: pid=76016: Fri Dec 6 04:14:49 2024 00:11:37.649 read: IOPS=1888, BW=7552KiB/s (7734kB/s)(7560KiB/1001msec) 00:11:37.649 slat (nsec): min=11289, max=73991, avg=16349.34, stdev=7118.13 00:11:37.649 clat (usec): min=156, max=1497, avg=248.79, stdev=59.64 00:11:37.649 lat (usec): min=169, max=1521, avg=265.14, stdev=60.52 00:11:37.649 clat percentiles (usec): 00:11:37.649 | 1.00th=[ 167], 5.00th=[ 184], 10.00th=[ 192], 20.00th=[ 204], 00:11:37.649 | 30.00th=[ 217], 40.00th=[ 229], 50.00th=[ 239], 60.00th=[ 251], 00:11:37.649 | 70.00th=[ 265], 80.00th=[ 285], 90.00th=[ 318], 95.00th=[ 347], 00:11:37.649 | 99.00th=[ 408], 99.50th=[ 424], 99.90th=[ 725], 99.95th=[ 1500], 00:11:37.649 | 99.99th=[ 1500] 00:11:37.649 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:11:37.649 slat (usec): min=14, max=129, avg=27.84, stdev=12.32 00:11:37.649 clat (usec): min=110, max=608, avg=212.06, stdev=69.09 00:11:37.649 lat (usec): min=128, max=655, avg=239.90, stdev=75.46 00:11:37.649 clat percentiles (usec): 00:11:37.649 | 1.00th=[ 123], 5.00th=[ 137], 10.00th=[ 145], 20.00th=[ 159], 00:11:37.649 | 30.00th=[ 169], 
40.00th=[ 182], 50.00th=[ 194], 60.00th=[ 210], 00:11:37.649 | 70.00th=[ 231], 80.00th=[ 260], 90.00th=[ 302], 95.00th=[ 338], 00:11:37.649 | 99.00th=[ 457], 99.50th=[ 490], 99.90th=[ 545], 99.95th=[ 545], 00:11:37.649 | 99.99th=[ 611] 00:11:37.649 bw ( KiB/s): min= 8192, max= 8192, per=27.20%, avg=8192.00, stdev= 0.00, samples=1 00:11:37.649 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:37.649 lat (usec) : 250=68.66%, 500=31.01%, 750=0.30% 00:11:37.649 lat (msec) : 2=0.03% 00:11:37.649 cpu : usr=2.20%, sys=6.30%, ctx=3941, majf=0, minf=13 00:11:37.649 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:37.649 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:37.649 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:37.649 issued rwts: total=1890,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:37.649 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:37.649 job3: (groupid=0, jobs=1): err= 0: pid=76017: Fri Dec 6 04:14:49 2024 00:11:37.649 read: IOPS=1025, BW=4104KiB/s (4202kB/s)(4108KiB/1001msec) 00:11:37.649 slat (nsec): min=12615, max=82629, avg=19336.73, stdev=6387.90 00:11:37.649 clat (usec): min=197, max=2051, avg=389.85, stdev=100.88 00:11:37.649 lat (usec): min=214, max=2066, avg=409.19, stdev=101.28 00:11:37.649 clat percentiles (usec): 00:11:37.649 | 1.00th=[ 223], 5.00th=[ 249], 10.00th=[ 269], 20.00th=[ 297], 00:11:37.649 | 30.00th=[ 334], 40.00th=[ 375], 50.00th=[ 400], 60.00th=[ 416], 00:11:37.649 | 70.00th=[ 437], 80.00th=[ 461], 90.00th=[ 494], 95.00th=[ 523], 00:11:37.649 | 99.00th=[ 578], 99.50th=[ 611], 99.90th=[ 791], 99.95th=[ 2057], 00:11:37.649 | 99.99th=[ 2057] 00:11:37.649 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:11:37.649 slat (nsec): min=15846, max=98594, avg=36135.45, stdev=9615.11 00:11:37.649 clat (usec): min=172, max=761, avg=336.54, stdev=86.09 00:11:37.649 lat (usec): min=207, max=787, avg=372.67, stdev=87.78 00:11:37.649 clat percentiles (usec): 00:11:37.649 | 1.00th=[ 192], 5.00th=[ 210], 10.00th=[ 227], 20.00th=[ 253], 00:11:37.649 | 30.00th=[ 277], 40.00th=[ 310], 50.00th=[ 330], 60.00th=[ 355], 00:11:37.649 | 70.00th=[ 383], 80.00th=[ 420], 90.00th=[ 453], 95.00th=[ 486], 00:11:37.649 | 99.00th=[ 529], 99.50th=[ 545], 99.90th=[ 586], 99.95th=[ 758], 00:11:37.649 | 99.99th=[ 758] 00:11:37.649 bw ( KiB/s): min= 7120, max= 7120, per=23.64%, avg=7120.00, stdev= 0.00, samples=1 00:11:37.649 iops : min= 1780, max= 1780, avg=1780.00, stdev= 0.00, samples=1 00:11:37.649 lat (usec) : 250=13.62%, 500=80.84%, 750=5.42%, 1000=0.08% 00:11:37.649 lat (msec) : 4=0.04% 00:11:37.649 cpu : usr=1.50%, sys=6.70%, ctx=2566, majf=0, minf=13 00:11:37.649 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:37.649 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:37.649 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:37.649 issued rwts: total=1027,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:37.649 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:37.649 00:11:37.649 Run status group 0 (all jobs): 00:11:37.649 READ: bw=24.0MiB/s (25.2MB/s), 4092KiB/s-8823KiB/s (4190kB/s-9035kB/s), io=24.0MiB (25.2MB), run=1001-1001msec 00:11:37.649 WRITE: bw=29.4MiB/s (30.8MB/s), 5570KiB/s-9.99MiB/s (5704kB/s-10.5MB/s), io=29.4MiB (30.9MB), run=1001-1001msec 00:11:37.649 00:11:37.649 Disk stats (read/write): 00:11:37.649 nvme0n1: 
ios=1074/1052, merge=0/0, ticks=455/358, in_queue=813, util=87.17% 00:11:37.649 nvme0n2: ios=2054/2048, merge=0/0, ticks=470/365, in_queue=835, util=88.25% 00:11:37.649 nvme0n3: ios=1536/1798, merge=0/0, ticks=401/400, in_queue=801, util=89.29% 00:11:37.649 nvme0n4: ios=1024/1197, merge=0/0, ticks=390/408, in_queue=798, util=89.66% 00:11:37.649 04:14:49 -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:11:37.649 [global] 00:11:37.649 thread=1 00:11:37.649 invalidate=1 00:11:37.649 rw=write 00:11:37.649 time_based=1 00:11:37.649 runtime=1 00:11:37.649 ioengine=libaio 00:11:37.649 direct=1 00:11:37.649 bs=4096 00:11:37.649 iodepth=128 00:11:37.649 norandommap=0 00:11:37.649 numjobs=1 00:11:37.649 00:11:37.649 verify_dump=1 00:11:37.649 verify_backlog=512 00:11:37.649 verify_state_save=0 00:11:37.649 do_verify=1 00:11:37.649 verify=crc32c-intel 00:11:37.649 [job0] 00:11:37.649 filename=/dev/nvme0n1 00:11:37.649 [job1] 00:11:37.649 filename=/dev/nvme0n2 00:11:37.649 [job2] 00:11:37.649 filename=/dev/nvme0n3 00:11:37.649 [job3] 00:11:37.649 filename=/dev/nvme0n4 00:11:37.649 Could not set queue depth (nvme0n1) 00:11:37.649 Could not set queue depth (nvme0n2) 00:11:37.649 Could not set queue depth (nvme0n3) 00:11:37.649 Could not set queue depth (nvme0n4) 00:11:37.649 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:37.649 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:37.649 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:37.649 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:37.649 fio-3.35 00:11:37.649 Starting 4 threads 00:11:39.029 00:11:39.029 job0: (groupid=0, jobs=1): err= 0: pid=76072: Fri Dec 6 04:14:51 2024 00:11:39.029 read: IOPS=4087, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1002msec) 00:11:39.029 slat (usec): min=7, max=4053, avg=112.20, stdev=539.40 00:11:39.029 clat (usec): min=10000, max=17148, avg=14999.48, stdev=885.99 00:11:39.029 lat (usec): min=12906, max=17162, avg=15111.68, stdev=706.19 00:11:39.029 clat percentiles (usec): 00:11:39.029 | 1.00th=[11600], 5.00th=[13435], 10.00th=[14222], 20.00th=[14484], 00:11:39.029 | 30.00th=[14615], 40.00th=[14746], 50.00th=[15139], 60.00th=[15270], 00:11:39.029 | 70.00th=[15401], 80.00th=[15664], 90.00th=[15926], 95.00th=[16188], 00:11:39.029 | 99.00th=[16909], 99.50th=[16909], 99.90th=[17171], 99.95th=[17171], 00:11:39.029 | 99.99th=[17171] 00:11:39.029 write: IOPS=4312, BW=16.8MiB/s (17.7MB/s)(16.9MiB/1002msec); 0 zone resets 00:11:39.029 slat (usec): min=11, max=4432, avg=117.13, stdev=522.22 00:11:39.029 clat (usec): min=258, max=18520, avg=15039.80, stdev=1788.52 00:11:39.029 lat (usec): min=2848, max=18539, avg=15156.93, stdev=1718.19 00:11:39.029 clat percentiles (usec): 00:11:39.029 | 1.00th=[ 6521], 5.00th=[12780], 10.00th=[13566], 20.00th=[14222], 00:11:39.030 | 30.00th=[14877], 40.00th=[15270], 50.00th=[15401], 60.00th=[15533], 00:11:39.030 | 70.00th=[15664], 80.00th=[15926], 90.00th=[16581], 95.00th=[17171], 00:11:39.030 | 99.00th=[18220], 99.50th=[18482], 99.90th=[18482], 99.95th=[18482], 00:11:39.030 | 99.99th=[18482] 00:11:39.030 bw ( KiB/s): min=16392, max=16392, per=35.95%, avg=16392.00, stdev= 0.00, samples=1 00:11:39.030 iops : min= 4098, max= 4098, avg=4098.00, stdev= 0.00, samples=1 00:11:39.030 lat (usec) 
: 500=0.01% 00:11:39.030 lat (msec) : 4=0.38%, 10=0.76%, 20=98.85% 00:11:39.030 cpu : usr=3.70%, sys=12.69%, ctx=264, majf=0, minf=9 00:11:39.030 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:11:39.030 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:39.030 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:39.030 issued rwts: total=4096,4321,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:39.030 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:39.030 job1: (groupid=0, jobs=1): err= 0: pid=76073: Fri Dec 6 04:14:51 2024 00:11:39.030 read: IOPS=2008, BW=8036KiB/s (8229kB/s)(8068KiB/1004msec) 00:11:39.030 slat (usec): min=5, max=8038, avg=240.83, stdev=1241.51 00:11:39.030 clat (usec): min=2960, max=34752, avg=30305.48, stdev=3618.50 00:11:39.030 lat (usec): min=9524, max=34772, avg=30546.31, stdev=3398.35 00:11:39.030 clat percentiles (usec): 00:11:39.030 | 1.00th=[ 9896], 5.00th=[24249], 10.00th=[28181], 20.00th=[30016], 00:11:39.030 | 30.00th=[30278], 40.00th=[30802], 50.00th=[31065], 60.00th=[31327], 00:11:39.030 | 70.00th=[31589], 80.00th=[32113], 90.00th=[32900], 95.00th=[33162], 00:11:39.030 | 99.00th=[34341], 99.50th=[34866], 99.90th=[34866], 99.95th=[34866], 00:11:39.030 | 99.99th=[34866] 00:11:39.030 write: IOPS=2039, BW=8159KiB/s (8355kB/s)(8192KiB/1004msec); 0 zone resets 00:11:39.030 slat (usec): min=10, max=8778, avg=242.86, stdev=1226.34 00:11:39.030 clat (usec): min=22618, max=36617, avg=31593.03, stdev=1776.13 00:11:39.030 lat (usec): min=25213, max=36641, avg=31835.89, stdev=1305.30 00:11:39.030 clat percentiles (usec): 00:11:39.030 | 1.00th=[24249], 5.00th=[29754], 10.00th=[30278], 20.00th=[30540], 00:11:39.030 | 30.00th=[31065], 40.00th=[31327], 50.00th=[31589], 60.00th=[32113], 00:11:39.030 | 70.00th=[32375], 80.00th=[32637], 90.00th=[33162], 95.00th=[33817], 00:11:39.030 | 99.00th=[36439], 99.50th=[36439], 99.90th=[36439], 99.95th=[36439], 00:11:39.030 | 99.99th=[36439] 00:11:39.030 bw ( KiB/s): min= 8175, max= 8208, per=17.96%, avg=8191.50, stdev=23.33, samples=2 00:11:39.030 iops : min= 2043, max= 2052, avg=2047.50, stdev= 6.36, samples=2 00:11:39.030 lat (msec) : 4=0.02%, 10=0.62%, 20=0.96%, 50=98.40% 00:11:39.030 cpu : usr=2.79%, sys=6.38%, ctx=129, majf=0, minf=11 00:11:39.030 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:11:39.030 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:39.030 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:39.030 issued rwts: total=2017,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:39.030 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:39.030 job2: (groupid=0, jobs=1): err= 0: pid=76078: Fri Dec 6 04:14:51 2024 00:11:39.030 read: IOPS=2043, BW=8176KiB/s (8372kB/s)(8192KiB/1002msec) 00:11:39.030 slat (usec): min=5, max=8188, avg=237.16, stdev=1234.36 00:11:39.030 clat (usec): min=2714, max=34987, avg=29884.15, stdev=4917.53 00:11:39.030 lat (usec): min=2726, max=35004, avg=30121.31, stdev=4779.82 00:11:39.030 clat percentiles (usec): 00:11:39.030 | 1.00th=[ 3130], 5.00th=[22938], 10.00th=[27919], 20.00th=[29754], 00:11:39.030 | 30.00th=[30278], 40.00th=[30540], 50.00th=[30802], 60.00th=[31327], 00:11:39.030 | 70.00th=[31589], 80.00th=[31851], 90.00th=[32637], 95.00th=[33162], 00:11:39.030 | 99.00th=[34866], 99.50th=[34866], 99.90th=[34866], 99.95th=[34866], 00:11:39.030 | 99.99th=[34866] 00:11:39.030 write: IOPS=2044, BW=8180KiB/s 
(8376kB/s)(8196KiB/1002msec); 0 zone resets 00:11:39.030 slat (usec): min=11, max=8894, avg=243.69, stdev=1228.78 00:11:39.030 clat (usec): min=145, max=37031, avg=31502.70, stdev=1944.47 00:11:39.030 lat (usec): min=2708, max=37076, avg=31746.39, stdev=1506.42 00:11:39.030 clat percentiles (usec): 00:11:39.030 | 1.00th=[23987], 5.00th=[29754], 10.00th=[30016], 20.00th=[30540], 00:11:39.030 | 30.00th=[30802], 40.00th=[31327], 50.00th=[31589], 60.00th=[31851], 00:11:39.030 | 70.00th=[32113], 80.00th=[32637], 90.00th=[32900], 95.00th=[33424], 00:11:39.030 | 99.00th=[36963], 99.50th=[36963], 99.90th=[36963], 99.95th=[36963], 00:11:39.030 | 99.99th=[36963] 00:11:39.030 bw ( KiB/s): min= 8192, max= 8192, per=17.97%, avg=8192.00, stdev= 0.00, samples=2 00:11:39.030 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:11:39.030 lat (usec) : 250=0.02% 00:11:39.030 lat (msec) : 4=0.78%, 10=0.76%, 20=0.81%, 50=97.63% 00:11:39.030 cpu : usr=2.40%, sys=5.39%, ctx=129, majf=0, minf=17 00:11:39.030 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:11:39.030 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:39.030 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:39.030 issued rwts: total=2048,2049,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:39.030 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:39.030 job3: (groupid=0, jobs=1): err= 0: pid=76079: Fri Dec 6 04:14:51 2024 00:11:39.030 read: IOPS=2889, BW=11.3MiB/s (11.8MB/s)(11.4MiB/1008msec) 00:11:39.030 slat (usec): min=7, max=6142, avg=161.05, stdev=798.81 00:11:39.030 clat (usec): min=2458, max=25493, avg=20891.11, stdev=2073.83 00:11:39.030 lat (usec): min=8600, max=25508, avg=21052.16, stdev=1913.25 00:11:39.030 clat percentiles (usec): 00:11:39.030 | 1.00th=[ 9110], 5.00th=[17695], 10.00th=[19268], 20.00th=[19792], 00:11:39.030 | 30.00th=[20579], 40.00th=[21103], 50.00th=[21365], 60.00th=[21627], 00:11:39.030 | 70.00th=[21890], 80.00th=[22152], 90.00th=[22414], 95.00th=[23462], 00:11:39.030 | 99.00th=[24773], 99.50th=[25297], 99.90th=[25560], 99.95th=[25560], 00:11:39.030 | 99.99th=[25560] 00:11:39.030 write: IOPS=3047, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1008msec); 0 zone resets 00:11:39.030 slat (usec): min=10, max=5658, avg=164.91, stdev=769.61 00:11:39.030 clat (usec): min=14683, max=23649, avg=21440.23, stdev=1207.57 00:11:39.030 lat (usec): min=17727, max=23982, avg=21605.15, stdev=942.04 00:11:39.030 clat percentiles (usec): 00:11:39.030 | 1.00th=[16909], 5.00th=[19792], 10.00th=[20317], 20.00th=[20841], 00:11:39.030 | 30.00th=[21103], 40.00th=[21365], 50.00th=[21365], 60.00th=[21627], 00:11:39.030 | 70.00th=[21890], 80.00th=[22414], 90.00th=[22938], 95.00th=[23200], 00:11:39.030 | 99.00th=[23462], 99.50th=[23725], 99.90th=[23725], 99.95th=[23725], 00:11:39.030 | 99.99th=[23725] 00:11:39.030 bw ( KiB/s): min=12263, max=12312, per=26.95%, avg=12287.50, stdev=34.65, samples=2 00:11:39.030 iops : min= 3065, max= 3078, avg=3071.50, stdev= 9.19, samples=2 00:11:39.030 lat (msec) : 4=0.02%, 10=0.53%, 20=14.09%, 50=85.36% 00:11:39.030 cpu : usr=3.08%, sys=10.03%, ctx=188, majf=0, minf=15 00:11:39.030 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:11:39.030 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:39.030 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:39.030 issued rwts: total=2913,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:39.030 latency : 
target=0, window=0, percentile=100.00%, depth=128 00:11:39.030 00:11:39.030 Run status group 0 (all jobs): 00:11:39.030 READ: bw=42.9MiB/s (45.0MB/s), 8036KiB/s-16.0MiB/s (8229kB/s-16.7MB/s), io=43.3MiB (45.4MB), run=1002-1008msec 00:11:39.030 WRITE: bw=44.5MiB/s (46.7MB/s), 8159KiB/s-16.8MiB/s (8355kB/s-17.7MB/s), io=44.9MiB (47.1MB), run=1002-1008msec 00:11:39.030 00:11:39.030 Disk stats (read/write): 00:11:39.030 nvme0n1: ios=3634/3680, merge=0/0, ticks=11887/12317, in_queue=24204, util=89.27% 00:11:39.030 nvme0n2: ios=1585/1952, merge=0/0, ticks=11091/14533, in_queue=25624, util=89.78% 00:11:39.030 nvme0n3: ios=1536/1952, merge=0/0, ticks=10092/12790, in_queue=22882, util=88.66% 00:11:39.030 nvme0n4: ios=2560/2592, merge=0/0, ticks=12511/12551, in_queue=25062, util=89.72% 00:11:39.030 04:14:51 -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:11:39.030 [global] 00:11:39.030 thread=1 00:11:39.030 invalidate=1 00:11:39.030 rw=randwrite 00:11:39.030 time_based=1 00:11:39.030 runtime=1 00:11:39.030 ioengine=libaio 00:11:39.030 direct=1 00:11:39.030 bs=4096 00:11:39.030 iodepth=128 00:11:39.030 norandommap=0 00:11:39.030 numjobs=1 00:11:39.030 00:11:39.030 verify_dump=1 00:11:39.030 verify_backlog=512 00:11:39.030 verify_state_save=0 00:11:39.030 do_verify=1 00:11:39.030 verify=crc32c-intel 00:11:39.030 [job0] 00:11:39.030 filename=/dev/nvme0n1 00:11:39.030 [job1] 00:11:39.030 filename=/dev/nvme0n2 00:11:39.030 [job2] 00:11:39.030 filename=/dev/nvme0n3 00:11:39.030 [job3] 00:11:39.030 filename=/dev/nvme0n4 00:11:39.030 Could not set queue depth (nvme0n1) 00:11:39.030 Could not set queue depth (nvme0n2) 00:11:39.030 Could not set queue depth (nvme0n3) 00:11:39.030 Could not set queue depth (nvme0n4) 00:11:39.030 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:39.030 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:39.030 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:39.030 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:39.030 fio-3.35 00:11:39.030 Starting 4 threads 00:11:40.470 00:11:40.470 job0: (groupid=0, jobs=1): err= 0: pid=76139: Fri Dec 6 04:14:52 2024 00:11:40.470 read: IOPS=2318, BW=9275KiB/s (9497kB/s)(9284KiB/1001msec) 00:11:40.470 slat (usec): min=4, max=19337, avg=200.90, stdev=1063.26 00:11:40.470 clat (usec): min=193, max=60527, avg=24070.12, stdev=14912.69 00:11:40.470 lat (usec): min=2051, max=60543, avg=24271.02, stdev=15028.80 00:11:40.470 clat percentiles (usec): 00:11:40.470 | 1.00th=[ 4293], 5.00th=[10290], 10.00th=[11076], 20.00th=[11600], 00:11:40.470 | 30.00th=[12125], 40.00th=[12256], 50.00th=[12649], 60.00th=[31327], 00:11:40.470 | 70.00th=[39060], 80.00th=[41157], 90.00th=[43254], 95.00th=[45351], 00:11:40.470 | 99.00th=[55313], 99.50th=[58459], 99.90th=[60556], 99.95th=[60556], 00:11:40.470 | 99.99th=[60556] 00:11:40.470 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:11:40.470 slat (usec): min=5, max=17770, avg=200.78, stdev=939.46 00:11:40.470 clat (usec): min=8884, max=63986, avg=26581.69, stdev=14860.26 00:11:40.470 lat (usec): min=9100, max=64001, avg=26782.47, stdev=14953.83 00:11:40.470 clat percentiles (usec): 00:11:40.470 | 1.00th=[10290], 5.00th=[11469], 10.00th=[12125], 
20.00th=[12780], 00:11:40.470 | 30.00th=[12911], 40.00th=[13173], 50.00th=[16188], 60.00th=[35390], 00:11:40.470 | 70.00th=[40109], 80.00th=[43254], 90.00th=[44827], 95.00th=[46924], 00:11:40.470 | 99.00th=[60031], 99.50th=[61604], 99.90th=[64226], 99.95th=[64226], 00:11:40.470 | 99.99th=[64226] 00:11:40.470 bw ( KiB/s): min= 7536, max= 7536, per=14.91%, avg=7536.00, stdev= 0.00, samples=1 00:11:40.470 iops : min= 1884, max= 1884, avg=1884.00, stdev= 0.00, samples=1 00:11:40.470 lat (usec) : 250=0.02% 00:11:40.470 lat (msec) : 4=0.33%, 10=2.36%, 20=50.73%, 50=43.97%, 100=2.60% 00:11:40.470 cpu : usr=2.60%, sys=7.10%, ctx=439, majf=0, minf=17 00:11:40.470 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:11:40.470 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:40.470 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:40.470 issued rwts: total=2321,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:40.470 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:40.470 job1: (groupid=0, jobs=1): err= 0: pid=76140: Fri Dec 6 04:14:52 2024 00:11:40.470 read: IOPS=4416, BW=17.2MiB/s (18.1MB/s)(17.3MiB/1002msec) 00:11:40.470 slat (usec): min=7, max=5598, avg=104.01, stdev=489.12 00:11:40.470 clat (usec): min=358, max=19968, avg=13379.37, stdev=2381.33 00:11:40.470 lat (usec): min=2118, max=21492, avg=13483.38, stdev=2387.23 00:11:40.470 clat percentiles (usec): 00:11:40.470 | 1.00th=[ 5932], 5.00th=[ 9896], 10.00th=[10683], 20.00th=[11863], 00:11:40.470 | 30.00th=[12256], 40.00th=[12518], 50.00th=[13173], 60.00th=[14091], 00:11:40.470 | 70.00th=[14746], 80.00th=[15270], 90.00th=[16188], 95.00th=[17171], 00:11:40.470 | 99.00th=[18482], 99.50th=[19006], 99.90th=[19530], 99.95th=[20055], 00:11:40.470 | 99.99th=[20055] 00:11:40.471 write: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec); 0 zone resets 00:11:40.471 slat (usec): min=10, max=4712, avg=108.94, stdev=483.05 00:11:40.471 clat (usec): min=9736, max=20924, avg=14590.45, stdev=1818.14 00:11:40.471 lat (usec): min=9754, max=20973, avg=14699.39, stdev=1876.18 00:11:40.471 clat percentiles (usec): 00:11:40.471 | 1.00th=[10814], 5.00th=[11731], 10.00th=[12125], 20.00th=[12649], 00:11:40.471 | 30.00th=[13698], 40.00th=[14222], 50.00th=[14615], 60.00th=[15008], 00:11:40.471 | 70.00th=[15533], 80.00th=[16188], 90.00th=[16909], 95.00th=[17433], 00:11:40.471 | 99.00th=[19268], 99.50th=[19792], 99.90th=[20579], 99.95th=[20579], 00:11:40.471 | 99.99th=[20841] 00:11:40.471 bw ( KiB/s): min=17312, max=19552, per=36.46%, avg=18432.00, stdev=1583.92, samples=2 00:11:40.471 iops : min= 4328, max= 4888, avg=4608.00, stdev=395.98, samples=2 00:11:40.471 lat (usec) : 500=0.01% 00:11:40.471 lat (msec) : 4=0.39%, 10=3.03%, 20=96.36%, 50=0.21% 00:11:40.471 cpu : usr=4.00%, sys=14.29%, ctx=446, majf=0, minf=11 00:11:40.471 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:11:40.471 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:40.471 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:40.471 issued rwts: total=4425,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:40.471 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:40.471 job2: (groupid=0, jobs=1): err= 0: pid=76141: Fri Dec 6 04:14:52 2024 00:11:40.471 read: IOPS=3218, BW=12.6MiB/s (13.2MB/s)(12.7MiB/1007msec) 00:11:40.471 slat (usec): min=3, max=9675, avg=146.29, stdev=700.14 00:11:40.471 clat (usec): min=5093, max=41908, 
avg=18372.71, stdev=5257.12 00:11:40.471 lat (usec): min=7020, max=41944, avg=18519.00, stdev=5300.51 00:11:40.471 clat percentiles (usec): 00:11:40.471 | 1.00th=[ 8160], 5.00th=[13304], 10.00th=[13829], 20.00th=[14484], 00:11:40.471 | 30.00th=[15401], 40.00th=[16188], 50.00th=[16909], 60.00th=[17695], 00:11:40.471 | 70.00th=[18744], 80.00th=[20579], 90.00th=[27132], 95.00th=[29492], 00:11:40.471 | 99.00th=[34341], 99.50th=[36963], 99.90th=[38536], 99.95th=[38536], 00:11:40.471 | 99.99th=[41681] 00:11:40.471 write: IOPS=3559, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1007msec); 0 zone resets 00:11:40.471 slat (usec): min=11, max=6600, avg=137.80, stdev=578.77 00:11:40.471 clat (usec): min=11488, max=36081, avg=18804.15, stdev=3920.55 00:11:40.471 lat (usec): min=11523, max=36153, avg=18941.95, stdev=3951.01 00:11:40.471 clat percentiles (usec): 00:11:40.471 | 1.00th=[13829], 5.00th=[15008], 10.00th=[15533], 20.00th=[16319], 00:11:40.471 | 30.00th=[16712], 40.00th=[17171], 50.00th=[17695], 60.00th=[17957], 00:11:40.471 | 70.00th=[18744], 80.00th=[20317], 90.00th=[25297], 95.00th=[27919], 00:11:40.471 | 99.00th=[32375], 99.50th=[34866], 99.90th=[35914], 99.95th=[35914], 00:11:40.471 | 99.99th=[35914] 00:11:40.471 bw ( KiB/s): min=12288, max=16416, per=28.39%, avg=14352.00, stdev=2918.94, samples=2 00:11:40.471 iops : min= 3072, max= 4104, avg=3588.00, stdev=729.73, samples=2 00:11:40.471 lat (msec) : 10=0.63%, 20=76.98%, 50=22.39% 00:11:40.471 cpu : usr=3.78%, sys=11.03%, ctx=479, majf=0, minf=13 00:11:40.471 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:11:40.471 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:40.471 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:40.471 issued rwts: total=3241,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:40.471 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:40.471 job3: (groupid=0, jobs=1): err= 0: pid=76142: Fri Dec 6 04:14:52 2024 00:11:40.471 read: IOPS=1526, BW=6107KiB/s (6254kB/s)(6144KiB/1006msec) 00:11:40.471 slat (usec): min=6, max=19192, avg=297.29, stdev=1348.77 00:11:40.471 clat (usec): min=22883, max=61098, avg=37269.81, stdev=8316.47 00:11:40.471 lat (usec): min=22977, max=62728, avg=37567.09, stdev=8363.85 00:11:40.471 clat percentiles (usec): 00:11:40.471 | 1.00th=[24511], 5.00th=[25560], 10.00th=[26346], 20.00th=[27919], 00:11:40.471 | 30.00th=[30540], 40.00th=[34866], 50.00th=[39060], 60.00th=[40109], 00:11:40.471 | 70.00th=[41681], 80.00th=[43779], 90.00th=[46924], 95.00th=[52691], 00:11:40.471 | 99.00th=[58459], 99.50th=[59507], 99.90th=[60556], 99.95th=[61080], 00:11:40.471 | 99.99th=[61080] 00:11:40.471 write: IOPS=1964, BW=7857KiB/s (8045kB/s)(7904KiB/1006msec); 0 zone resets 00:11:40.471 slat (usec): min=5, max=18531, avg=267.01, stdev=1120.50 00:11:40.471 clat (usec): min=5731, max=63048, avg=35381.72, stdev=10909.08 00:11:40.471 lat (usec): min=5765, max=64564, avg=35648.73, stdev=10989.96 00:11:40.471 clat percentiles (usec): 00:11:40.471 | 1.00th=[ 7373], 5.00th=[17957], 10.00th=[20579], 20.00th=[25560], 00:11:40.471 | 30.00th=[28181], 40.00th=[31851], 50.00th=[37487], 60.00th=[40633], 00:11:40.471 | 70.00th=[43779], 80.00th=[44827], 90.00th=[48497], 95.00th=[49546], 00:11:40.471 | 99.00th=[58459], 99.50th=[60556], 99.90th=[63177], 99.95th=[63177], 00:11:40.471 | 99.99th=[63177] 00:11:40.471 bw ( KiB/s): min= 7112, max= 7695, per=14.64%, avg=7403.50, stdev=412.24, samples=2 00:11:40.471 iops : min= 1778, max= 1923, avg=1850.50, 
stdev=102.53, samples=2 00:11:40.471 lat (msec) : 10=1.20%, 20=3.27%, 50=91.23%, 100=4.30% 00:11:40.471 cpu : usr=1.59%, sys=5.77%, ctx=498, majf=0, minf=9 00:11:40.471 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:11:40.471 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:40.471 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:40.471 issued rwts: total=1536,1976,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:40.471 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:40.471 00:11:40.471 Run status group 0 (all jobs): 00:11:40.471 READ: bw=44.7MiB/s (46.9MB/s), 6107KiB/s-17.2MiB/s (6254kB/s-18.1MB/s), io=45.0MiB (47.2MB), run=1001-1007msec 00:11:40.471 WRITE: bw=49.4MiB/s (51.8MB/s), 7857KiB/s-18.0MiB/s (8045kB/s-18.8MB/s), io=49.7MiB (52.1MB), run=1001-1007msec 00:11:40.471 00:11:40.471 Disk stats (read/write): 00:11:40.471 nvme0n1: ios=1585/1792, merge=0/0, ticks=22196/26346, in_queue=48542, util=85.73% 00:11:40.471 nvme0n2: ios=3623/3893, merge=0/0, ticks=15852/16832, in_queue=32684, util=89.90% 00:11:40.471 nvme0n3: ios=2993/3072, merge=0/0, ticks=17650/15796, in_queue=33446, util=89.28% 00:11:40.471 nvme0n4: ios=1238/1536, merge=0/0, ticks=23387/27711, in_queue=51098, util=89.21% 00:11:40.471 04:14:52 -- target/fio.sh@55 -- # sync 00:11:40.471 04:14:52 -- target/fio.sh@59 -- # fio_pid=76156 00:11:40.471 04:14:52 -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:11:40.471 04:14:52 -- target/fio.sh@61 -- # sleep 3 00:11:40.471 [global] 00:11:40.471 thread=1 00:11:40.471 invalidate=1 00:11:40.471 rw=read 00:11:40.471 time_based=1 00:11:40.471 runtime=10 00:11:40.471 ioengine=libaio 00:11:40.471 direct=1 00:11:40.471 bs=4096 00:11:40.471 iodepth=1 00:11:40.471 norandommap=1 00:11:40.471 numjobs=1 00:11:40.471 00:11:40.471 [job0] 00:11:40.471 filename=/dev/nvme0n1 00:11:40.471 [job1] 00:11:40.471 filename=/dev/nvme0n2 00:11:40.471 [job2] 00:11:40.471 filename=/dev/nvme0n3 00:11:40.471 [job3] 00:11:40.471 filename=/dev/nvme0n4 00:11:40.471 Could not set queue depth (nvme0n1) 00:11:40.471 Could not set queue depth (nvme0n2) 00:11:40.471 Could not set queue depth (nvme0n3) 00:11:40.471 Could not set queue depth (nvme0n4) 00:11:40.471 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:40.471 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:40.471 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:40.471 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:40.471 fio-3.35 00:11:40.471 Starting 4 threads 00:11:43.758 04:14:55 -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:11:43.759 fio: pid=76199, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:43.759 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=49975296, buflen=4096 00:11:43.759 04:14:55 -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:11:43.759 fio: pid=76198, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:43.759 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=34066432, buflen=4096 00:11:43.759 04:14:56 -- target/fio.sh@65 -- # for malloc_bdev in 
$malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:43.759 04:14:56 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:44.018 fio: pid=76196, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:44.018 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=42463232, buflen=4096 00:11:44.018 04:14:56 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:44.018 04:14:56 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:11:44.279 fio: pid=76197, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:44.279 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=58138624, buflen=4096 00:11:44.279 00:11:44.279 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=76196: Fri Dec 6 04:14:56 2024 00:11:44.279 read: IOPS=3029, BW=11.8MiB/s (12.4MB/s)(40.5MiB/3422msec) 00:11:44.279 slat (usec): min=9, max=15389, avg=25.29, stdev=233.87 00:11:44.279 clat (usec): min=121, max=3466, avg=302.81, stdev=130.44 00:11:44.279 lat (usec): min=132, max=15557, avg=328.10, stdev=269.35 00:11:44.279 clat percentiles (usec): 00:11:44.279 | 1.00th=[ 145], 5.00th=[ 163], 10.00th=[ 180], 20.00th=[ 212], 00:11:44.279 | 30.00th=[ 235], 40.00th=[ 253], 50.00th=[ 269], 60.00th=[ 293], 00:11:44.279 | 70.00th=[ 347], 80.00th=[ 400], 90.00th=[ 465], 95.00th=[ 506], 00:11:44.279 | 99.00th=[ 570], 99.50th=[ 603], 99.90th=[ 1549], 99.95th=[ 2606], 00:11:44.279 | 99.99th=[ 2802] 00:11:44.279 bw ( KiB/s): min= 8408, max=14264, per=23.37%, avg=11250.67, stdev=2954.87, samples=6 00:11:44.279 iops : min= 2102, max= 3566, avg=2812.67, stdev=738.72, samples=6 00:11:44.279 lat (usec) : 250=38.58%, 500=55.76%, 750=5.45%, 1000=0.06% 00:11:44.279 lat (msec) : 2=0.06%, 4=0.09% 00:11:44.279 cpu : usr=1.23%, sys=5.52%, ctx=10373, majf=0, minf=1 00:11:44.279 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:44.279 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:44.279 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:44.279 issued rwts: total=10368,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:44.279 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:44.279 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=76197: Fri Dec 6 04:14:56 2024 00:11:44.279 read: IOPS=3790, BW=14.8MiB/s (15.5MB/s)(55.4MiB/3745msec) 00:11:44.279 slat (usec): min=7, max=18061, avg=18.49, stdev=205.01 00:11:44.279 clat (usec): min=115, max=30375, avg=244.17, stdev=266.68 00:11:44.279 lat (usec): min=125, max=30387, avg=262.65, stdev=337.57 00:11:44.279 clat percentiles (usec): 00:11:44.279 | 1.00th=[ 137], 5.00th=[ 153], 10.00th=[ 165], 20.00th=[ 184], 00:11:44.279 | 30.00th=[ 200], 40.00th=[ 217], 50.00th=[ 233], 60.00th=[ 251], 00:11:44.279 | 70.00th=[ 273], 80.00th=[ 297], 90.00th=[ 330], 95.00th=[ 355], 00:11:44.279 | 99.00th=[ 408], 99.50th=[ 424], 99.90th=[ 750], 99.95th=[ 1434], 00:11:44.279 | 99.99th=[ 3621] 00:11:44.279 bw ( KiB/s): min=12352, max=17328, per=31.15%, avg=14997.00, stdev=2062.31, samples=7 00:11:44.279 iops : min= 3088, max= 4332, avg=3749.14, stdev=515.56, samples=7 00:11:44.279 lat (usec) : 250=59.85%, 500=39.96%, 750=0.09%, 1000=0.03% 00:11:44.279 lat (msec) : 2=0.03%, 4=0.04%, 50=0.01% 00:11:44.279 
cpu : usr=1.36%, sys=4.57%, ctx=14211, majf=0, minf=2 00:11:44.279 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:44.279 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:44.279 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:44.279 issued rwts: total=14195,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:44.279 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:44.279 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=76198: Fri Dec 6 04:14:56 2024 00:11:44.279 read: IOPS=2619, BW=10.2MiB/s (10.7MB/s)(32.5MiB/3175msec) 00:11:44.279 slat (usec): min=11, max=9624, avg=24.66, stdev=125.14 00:11:44.279 clat (usec): min=190, max=8104, avg=354.75, stdev=160.95 00:11:44.279 lat (usec): min=204, max=10065, avg=379.40, stdev=206.16 00:11:44.279 clat percentiles (usec): 00:11:44.279 | 1.00th=[ 210], 5.00th=[ 231], 10.00th=[ 245], 20.00th=[ 269], 00:11:44.279 | 30.00th=[ 285], 40.00th=[ 310], 50.00th=[ 334], 60.00th=[ 363], 00:11:44.279 | 70.00th=[ 392], 80.00th=[ 433], 90.00th=[ 486], 95.00th=[ 523], 00:11:44.279 | 99.00th=[ 594], 99.50th=[ 676], 99.90th=[ 1565], 99.95th=[ 2376], 00:11:44.279 | 99.99th=[ 8094] 00:11:44.279 bw ( KiB/s): min= 8160, max=13064, per=21.70%, avg=10446.67, stdev=2299.55, samples=6 00:11:44.279 iops : min= 2040, max= 3266, avg=2611.67, stdev=574.89, samples=6 00:11:44.279 lat (usec) : 250=12.23%, 500=79.71%, 750=7.77%, 1000=0.14% 00:11:44.279 lat (msec) : 2=0.07%, 4=0.05%, 10=0.02% 00:11:44.279 cpu : usr=1.35%, sys=5.64%, ctx=8323, majf=0, minf=2 00:11:44.279 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:44.279 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:44.279 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:44.279 issued rwts: total=8318,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:44.279 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:44.279 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=76199: Fri Dec 6 04:14:56 2024 00:11:44.279 read: IOPS=4158, BW=16.2MiB/s (17.0MB/s)(47.7MiB/2934msec) 00:11:44.279 slat (nsec): min=11183, max=81703, avg=15733.00, stdev=6389.35 00:11:44.279 clat (usec): min=133, max=2285, avg=223.58, stdev=47.70 00:11:44.279 lat (usec): min=145, max=2297, avg=239.31, stdev=48.06 00:11:44.279 clat percentiles (usec): 00:11:44.279 | 1.00th=[ 151], 5.00th=[ 165], 10.00th=[ 176], 20.00th=[ 188], 00:11:44.279 | 30.00th=[ 200], 40.00th=[ 210], 50.00th=[ 221], 60.00th=[ 231], 00:11:44.279 | 70.00th=[ 241], 80.00th=[ 255], 90.00th=[ 277], 95.00th=[ 297], 00:11:44.279 | 99.00th=[ 330], 99.50th=[ 347], 99.90th=[ 392], 99.95th=[ 523], 00:11:44.279 | 99.99th=[ 1860] 00:11:44.279 bw ( KiB/s): min=16120, max=16856, per=34.57%, avg=16643.20, stdev=312.70, samples=5 00:11:44.279 iops : min= 4030, max= 4214, avg=4160.80, stdev=78.17, samples=5 00:11:44.279 lat (usec) : 250=76.07%, 500=23.86%, 750=0.03%, 1000=0.01% 00:11:44.279 lat (msec) : 2=0.02%, 4=0.01% 00:11:44.279 cpu : usr=1.09%, sys=5.52%, ctx=12203, majf=0, minf=2 00:11:44.279 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:44.279 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:44.279 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:44.279 issued rwts: total=12202,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:11:44.279 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:44.279 00:11:44.279 Run status group 0 (all jobs): 00:11:44.279 READ: bw=47.0MiB/s (49.3MB/s), 10.2MiB/s-16.2MiB/s (10.7MB/s-17.0MB/s), io=176MiB (185MB), run=2934-3745msec 00:11:44.279 00:11:44.279 Disk stats (read/write): 00:11:44.279 nvme0n1: ios=10090/0, merge=0/0, ticks=3161/0, in_queue=3161, util=95.05% 00:11:44.279 nvme0n2: ios=13551/0, merge=0/0, ticks=3242/0, in_queue=3242, util=95.56% 00:11:44.279 nvme0n3: ios=8125/0, merge=0/0, ticks=2913/0, in_queue=2913, util=96.12% 00:11:44.279 nvme0n4: ios=11928/0, merge=0/0, ticks=2727/0, in_queue=2727, util=96.79% 00:11:44.279 04:14:56 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:44.279 04:14:56 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:11:44.539 04:14:57 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:44.539 04:14:57 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:11:44.798 04:14:57 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:44.798 04:14:57 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:11:45.365 04:14:57 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:45.365 04:14:57 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:45.622 04:14:57 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:45.622 04:14:57 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:45.880 04:14:58 -- target/fio.sh@69 -- # fio_status=0 00:11:45.880 04:14:58 -- target/fio.sh@70 -- # wait 76156 00:11:45.880 04:14:58 -- target/fio.sh@70 -- # fio_status=4 00:11:45.880 04:14:58 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:45.880 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:45.880 04:14:58 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:45.880 04:14:58 -- common/autotest_common.sh@1208 -- # local i=0 00:11:45.880 04:14:58 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:11:45.880 04:14:58 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:45.880 04:14:58 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:45.880 04:14:58 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:11:45.880 nvmf hotplug test: fio failed as expected 00:11:45.880 04:14:58 -- common/autotest_common.sh@1220 -- # return 0 00:11:45.880 04:14:58 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:11:45.880 04:14:58 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:11:45.880 04:14:58 -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:46.138 04:14:58 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:11:46.138 04:14:58 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:46.138 04:14:58 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:46.138 04:14:58 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:46.138 04:14:58 -- target/fio.sh@91 -- # nvmftestfini 00:11:46.138 04:14:58 -- nvmf/common.sh@476 -- # 
nvmfcleanup 00:11:46.138 04:14:58 -- nvmf/common.sh@116 -- # sync 00:11:46.138 04:14:58 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:46.138 04:14:58 -- nvmf/common.sh@119 -- # set +e 00:11:46.138 04:14:58 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:46.138 04:14:58 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:46.138 rmmod nvme_tcp 00:11:46.138 rmmod nvme_fabrics 00:11:46.138 rmmod nvme_keyring 00:11:46.396 04:14:58 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:46.396 04:14:58 -- nvmf/common.sh@123 -- # set -e 00:11:46.396 04:14:58 -- nvmf/common.sh@124 -- # return 0 00:11:46.396 04:14:58 -- nvmf/common.sh@477 -- # '[' -n 75766 ']' 00:11:46.396 04:14:58 -- nvmf/common.sh@478 -- # killprocess 75766 00:11:46.396 04:14:58 -- common/autotest_common.sh@936 -- # '[' -z 75766 ']' 00:11:46.396 04:14:58 -- common/autotest_common.sh@940 -- # kill -0 75766 00:11:46.396 04:14:58 -- common/autotest_common.sh@941 -- # uname 00:11:46.396 04:14:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:46.396 04:14:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 75766 00:11:46.396 killing process with pid 75766 00:11:46.396 04:14:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:46.396 04:14:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:46.396 04:14:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 75766' 00:11:46.396 04:14:58 -- common/autotest_common.sh@955 -- # kill 75766 00:11:46.396 04:14:58 -- common/autotest_common.sh@960 -- # wait 75766 00:11:46.653 04:14:58 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:46.653 04:14:58 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:46.653 04:14:58 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:46.653 04:14:58 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:46.653 04:14:58 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:46.653 04:14:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:46.653 04:14:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:46.653 04:14:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:46.654 04:14:59 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:11:46.654 00:11:46.654 real 0m20.035s 00:11:46.654 user 1m15.831s 00:11:46.654 sys 0m9.835s 00:11:46.654 04:14:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:46.654 04:14:59 -- common/autotest_common.sh@10 -- # set +x 00:11:46.654 ************************************ 00:11:46.654 END TEST nvmf_fio_target 00:11:46.654 ************************************ 00:11:46.654 04:14:59 -- nvmf/nvmf.sh@55 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:46.654 04:14:59 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:46.654 04:14:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:46.654 04:14:59 -- common/autotest_common.sh@10 -- # set +x 00:11:46.654 ************************************ 00:11:46.654 START TEST nvmf_bdevio 00:11:46.654 ************************************ 00:11:46.654 04:14:59 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:46.654 * Looking for test storage... 
00:11:46.654 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:46.654 04:14:59 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:46.654 04:14:59 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:46.654 04:14:59 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:46.912 04:14:59 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:46.912 04:14:59 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:46.912 04:14:59 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:46.912 04:14:59 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:46.912 04:14:59 -- scripts/common.sh@335 -- # IFS=.-: 00:11:46.912 04:14:59 -- scripts/common.sh@335 -- # read -ra ver1 00:11:46.912 04:14:59 -- scripts/common.sh@336 -- # IFS=.-: 00:11:46.912 04:14:59 -- scripts/common.sh@336 -- # read -ra ver2 00:11:46.912 04:14:59 -- scripts/common.sh@337 -- # local 'op=<' 00:11:46.912 04:14:59 -- scripts/common.sh@339 -- # ver1_l=2 00:11:46.912 04:14:59 -- scripts/common.sh@340 -- # ver2_l=1 00:11:46.912 04:14:59 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:46.912 04:14:59 -- scripts/common.sh@343 -- # case "$op" in 00:11:46.912 04:14:59 -- scripts/common.sh@344 -- # : 1 00:11:46.912 04:14:59 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:46.912 04:14:59 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:46.912 04:14:59 -- scripts/common.sh@364 -- # decimal 1 00:11:46.912 04:14:59 -- scripts/common.sh@352 -- # local d=1 00:11:46.912 04:14:59 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:46.912 04:14:59 -- scripts/common.sh@354 -- # echo 1 00:11:46.912 04:14:59 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:46.912 04:14:59 -- scripts/common.sh@365 -- # decimal 2 00:11:46.912 04:14:59 -- scripts/common.sh@352 -- # local d=2 00:11:46.912 04:14:59 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:46.912 04:14:59 -- scripts/common.sh@354 -- # echo 2 00:11:46.912 04:14:59 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:46.912 04:14:59 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:46.912 04:14:59 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:46.912 04:14:59 -- scripts/common.sh@367 -- # return 0 00:11:46.913 04:14:59 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:46.913 04:14:59 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:46.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.913 --rc genhtml_branch_coverage=1 00:11:46.913 --rc genhtml_function_coverage=1 00:11:46.913 --rc genhtml_legend=1 00:11:46.913 --rc geninfo_all_blocks=1 00:11:46.913 --rc geninfo_unexecuted_blocks=1 00:11:46.913 00:11:46.913 ' 00:11:46.913 04:14:59 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:46.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.913 --rc genhtml_branch_coverage=1 00:11:46.913 --rc genhtml_function_coverage=1 00:11:46.913 --rc genhtml_legend=1 00:11:46.913 --rc geninfo_all_blocks=1 00:11:46.913 --rc geninfo_unexecuted_blocks=1 00:11:46.913 00:11:46.913 ' 00:11:46.913 04:14:59 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:46.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.913 --rc genhtml_branch_coverage=1 00:11:46.913 --rc genhtml_function_coverage=1 00:11:46.913 --rc genhtml_legend=1 00:11:46.913 --rc geninfo_all_blocks=1 00:11:46.913 --rc geninfo_unexecuted_blocks=1 00:11:46.913 00:11:46.913 ' 00:11:46.913 
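For orientation: the cmp_versions trace above is nothing more than a field-by-field numeric comparison of dotted version strings, apparently used to pick which spelling of the lcov --rc coverage options to export. A minimal standalone sketch of the same idea (purely numeric fields assumed; the real helper in scripts/common.sh supports more operators and separators):

# is version $1 strictly older than version $2?
version_lt() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing fields count as 0
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1   # equal versions are not "less than"
}

version_lt 1.15 2 && echo 'lcov 1.15 < 2: keep the legacy --rc lcov_* options'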
04:14:59 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:46.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.913 --rc genhtml_branch_coverage=1 00:11:46.913 --rc genhtml_function_coverage=1 00:11:46.913 --rc genhtml_legend=1 00:11:46.913 --rc geninfo_all_blocks=1 00:11:46.913 --rc geninfo_unexecuted_blocks=1 00:11:46.913 00:11:46.913 ' 00:11:46.913 04:14:59 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:46.913 04:14:59 -- nvmf/common.sh@7 -- # uname -s 00:11:46.913 04:14:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:46.913 04:14:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:46.913 04:14:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:46.913 04:14:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:46.913 04:14:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:46.913 04:14:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:46.913 04:14:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:46.913 04:14:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:46.913 04:14:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:46.913 04:14:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:46.913 04:14:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cb4d3929-adbe-4142-b5d1-990bbf2d4fca 00:11:46.913 04:14:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=cb4d3929-adbe-4142-b5d1-990bbf2d4fca 00:11:46.913 04:14:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:46.913 04:14:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:46.913 04:14:59 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:46.913 04:14:59 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:46.913 04:14:59 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:46.913 04:14:59 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:46.913 04:14:59 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:46.913 04:14:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.913 04:14:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.913 04:14:59 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.913 04:14:59 -- paths/export.sh@5 -- # export PATH 00:11:46.913 04:14:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.913 04:14:59 -- nvmf/common.sh@46 -- # : 0 00:11:46.913 04:14:59 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:46.913 04:14:59 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:46.913 04:14:59 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:46.913 04:14:59 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:46.913 04:14:59 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:46.913 04:14:59 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:11:46.913 04:14:59 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:46.913 04:14:59 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:46.913 04:14:59 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:46.913 04:14:59 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:46.913 04:14:59 -- target/bdevio.sh@14 -- # nvmftestinit 00:11:46.913 04:14:59 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:46.913 04:14:59 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:46.913 04:14:59 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:46.913 04:14:59 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:46.913 04:14:59 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:46.913 04:14:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:46.913 04:14:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:46.913 04:14:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:46.913 04:14:59 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:11:46.913 04:14:59 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:11:46.913 04:14:59 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:11:46.913 04:14:59 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:11:46.913 04:14:59 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:11:46.913 04:14:59 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:11:46.913 04:14:59 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:46.913 04:14:59 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:46.913 04:14:59 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:46.913 04:14:59 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:11:46.913 04:14:59 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:46.913 04:14:59 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:46.913 04:14:59 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:46.913 04:14:59 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:46.913 04:14:59 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:46.913 04:14:59 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:46.913 04:14:59 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:46.913 04:14:59 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:46.913 04:14:59 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:11:46.913 04:14:59 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:11:46.913 Cannot find device "nvmf_tgt_br" 00:11:46.913 04:14:59 -- nvmf/common.sh@154 -- # true 00:11:46.913 04:14:59 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:11:46.913 Cannot find device "nvmf_tgt_br2" 00:11:46.913 04:14:59 -- nvmf/common.sh@155 -- # true 00:11:46.913 04:14:59 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:11:46.913 04:14:59 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:11:46.913 Cannot find device "nvmf_tgt_br" 00:11:46.913 04:14:59 -- nvmf/common.sh@157 -- # true 00:11:46.913 04:14:59 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:11:46.913 Cannot find device "nvmf_tgt_br2" 00:11:46.913 04:14:59 -- nvmf/common.sh@158 -- # true 00:11:46.913 04:14:59 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:11:46.913 04:14:59 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:11:46.913 04:14:59 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:46.913 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:46.913 04:14:59 -- nvmf/common.sh@161 -- # true 00:11:46.913 04:14:59 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:46.913 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:46.913 04:14:59 -- nvmf/common.sh@162 -- # true 00:11:46.913 04:14:59 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:11:46.913 04:14:59 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:46.913 04:14:59 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:46.913 04:14:59 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:46.913 04:14:59 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:47.172 04:14:59 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:47.172 04:14:59 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:47.172 04:14:59 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:47.172 04:14:59 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:47.172 04:14:59 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:11:47.172 04:14:59 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:11:47.172 04:14:59 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:11:47.172 04:14:59 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:11:47.172 04:14:59 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:47.172 04:14:59 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:47.172 04:14:59 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:11:47.172 04:14:59 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:11:47.172 04:14:59 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:11:47.172 04:14:59 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:11:47.172 04:14:59 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:47.172 04:14:59 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:47.172 04:14:59 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:47.172 04:14:59 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:47.172 04:14:59 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:11:47.172 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:47.172 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.137 ms 00:11:47.172 00:11:47.172 --- 10.0.0.2 ping statistics --- 00:11:47.172 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:47.172 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:11:47.172 04:14:59 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:11:47.172 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:47.172 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.034 ms 00:11:47.172 00:11:47.172 --- 10.0.0.3 ping statistics --- 00:11:47.172 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:47.172 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:11:47.172 04:14:59 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:47.172 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:47.172 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.068 ms 00:11:47.172 00:11:47.172 --- 10.0.0.1 ping statistics --- 00:11:47.172 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:47.172 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:11:47.172 04:14:59 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:47.172 04:14:59 -- nvmf/common.sh@421 -- # return 0 00:11:47.172 04:14:59 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:47.172 04:14:59 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:47.172 04:14:59 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:47.172 04:14:59 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:47.172 04:14:59 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:47.172 04:14:59 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:47.172 04:14:59 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:47.172 04:14:59 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:47.172 04:14:59 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:47.172 04:14:59 -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:47.172 04:14:59 -- common/autotest_common.sh@10 -- # set +x 00:11:47.172 04:14:59 -- nvmf/common.sh@469 -- # nvmfpid=76477 00:11:47.172 04:14:59 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:47.172 04:14:59 -- nvmf/common.sh@470 -- # waitforlisten 76477 00:11:47.172 04:14:59 -- common/autotest_common.sh@829 -- # '[' -z 76477 ']' 00:11:47.172 04:14:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:47.172 04:14:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:47.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
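Condensed, the nvmf_veth_init sequence traced above builds a small test network: one veth pair for the initiator, two pairs whose far ends are moved into the nvmf_tgt_ns_spdk namespace, and a host bridge joining the near ends. A minimal sketch with the same device names and addresses (the "Cannot find device" lines come from pre-cleanup of any earlier setup; teardown and error handling are omitted here):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator side
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br         # target side, first address
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2        # target side, second address
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'

ip link add nvmf_br type bridge && ip link set nvmf_br up        # join the host-side ends
for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done

iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings above then confirm the host reaches 10.0.0.2 and 10.0.0.3 inside the namespace and the namespace reaches 10.0.0.1 back, after which nvmf_tgt is started inside the namespace and waitforlisten polls for its RPC socket.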
00:11:47.172 04:14:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:47.172 04:14:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:47.172 04:14:59 -- common/autotest_common.sh@10 -- # set +x 00:11:47.172 [2024-12-06 04:14:59.696361] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:47.172 [2024-12-06 04:14:59.697042] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:47.431 [2024-12-06 04:14:59.839818] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:47.431 [2024-12-06 04:14:59.927681] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:47.431 [2024-12-06 04:14:59.927826] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:47.431 [2024-12-06 04:14:59.927838] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:47.431 [2024-12-06 04:14:59.927846] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:47.431 [2024-12-06 04:14:59.928004] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:11:47.431 [2024-12-06 04:14:59.928924] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:11:47.431 [2024-12-06 04:14:59.929061] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:11:47.431 [2024-12-06 04:14:59.929069] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:48.368 04:15:00 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:48.368 04:15:00 -- common/autotest_common.sh@862 -- # return 0 00:11:48.368 04:15:00 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:48.368 04:15:00 -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:48.368 04:15:00 -- common/autotest_common.sh@10 -- # set +x 00:11:48.368 04:15:00 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:48.368 04:15:00 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:48.368 04:15:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.368 04:15:00 -- common/autotest_common.sh@10 -- # set +x 00:11:48.368 [2024-12-06 04:15:00.711991] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:48.368 04:15:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.368 04:15:00 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:48.368 04:15:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.368 04:15:00 -- common/autotest_common.sh@10 -- # set +x 00:11:48.368 Malloc0 00:11:48.368 04:15:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.368 04:15:00 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:48.368 04:15:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.368 04:15:00 -- common/autotest_common.sh@10 -- # set +x 00:11:48.368 04:15:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.368 04:15:00 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:48.368 04:15:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.368 
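The rpc_cmd lines being traced here are effectively scripts/rpc.py calls against that target over /var/tmp/spdk.sock, the same script the fio test used earlier for bdev_malloc_delete and nvmf_delete_subsystem. Done by hand, the bring-up for this test, including the listener add that follows just below, would look roughly like:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192                       # same flags as the trace
$rpc bdev_malloc_create 64 512 -b Malloc0                          # 64 MB ramdisk, 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0      # attach the ramdisk as a namespace
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420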
04:15:00 -- common/autotest_common.sh@10 -- # set +x 00:11:48.368 04:15:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.368 04:15:00 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:48.368 04:15:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.368 04:15:00 -- common/autotest_common.sh@10 -- # set +x 00:11:48.368 [2024-12-06 04:15:00.779178] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:48.368 04:15:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.368 04:15:00 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:11:48.368 04:15:00 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:48.368 04:15:00 -- nvmf/common.sh@520 -- # config=() 00:11:48.368 04:15:00 -- nvmf/common.sh@520 -- # local subsystem config 00:11:48.368 04:15:00 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:11:48.368 04:15:00 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:11:48.368 { 00:11:48.368 "params": { 00:11:48.368 "name": "Nvme$subsystem", 00:11:48.368 "trtype": "$TEST_TRANSPORT", 00:11:48.368 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:48.368 "adrfam": "ipv4", 00:11:48.368 "trsvcid": "$NVMF_PORT", 00:11:48.368 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:48.368 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:48.368 "hdgst": ${hdgst:-false}, 00:11:48.368 "ddgst": ${ddgst:-false} 00:11:48.368 }, 00:11:48.368 "method": "bdev_nvme_attach_controller" 00:11:48.368 } 00:11:48.368 EOF 00:11:48.368 )") 00:11:48.368 04:15:00 -- nvmf/common.sh@542 -- # cat 00:11:48.368 04:15:00 -- nvmf/common.sh@544 -- # jq . 00:11:48.368 04:15:00 -- nvmf/common.sh@545 -- # IFS=, 00:11:48.368 04:15:00 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:11:48.368 "params": { 00:11:48.368 "name": "Nvme1", 00:11:48.368 "trtype": "tcp", 00:11:48.368 "traddr": "10.0.0.2", 00:11:48.368 "adrfam": "ipv4", 00:11:48.368 "trsvcid": "4420", 00:11:48.368 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:48.368 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:48.368 "hdgst": false, 00:11:48.368 "ddgst": false 00:11:48.368 }, 00:11:48.368 "method": "bdev_nvme_attach_controller" 00:11:48.368 }' 00:11:48.368 [2024-12-06 04:15:00.838529] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:48.368 [2024-12-06 04:15:00.838633] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76515 ] 00:11:48.627 [2024-12-06 04:15:00.979531] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:48.627 [2024-12-06 04:15:01.048744] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:48.627 [2024-12-06 04:15:01.048868] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:48.627 [2024-12-06 04:15:01.048871] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:48.886 [2024-12-06 04:15:01.216444] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:11:48.886 [2024-12-06 04:15:01.216486] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:11:48.886 I/O targets: 00:11:48.886 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:48.886 00:11:48.886 00:11:48.886 CUnit - A unit testing framework for C - Version 2.1-3 00:11:48.886 http://cunit.sourceforge.net/ 00:11:48.886 00:11:48.886 00:11:48.886 Suite: bdevio tests on: Nvme1n1 00:11:48.886 Test: blockdev write read block ...passed 00:11:48.886 Test: blockdev write zeroes read block ...passed 00:11:48.886 Test: blockdev write zeroes read no split ...passed 00:11:48.886 Test: blockdev write zeroes read split ...passed 00:11:48.886 Test: blockdev write zeroes read split partial ...passed 00:11:48.886 Test: blockdev reset ...[2024-12-06 04:15:01.251836] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:11:48.886 [2024-12-06 04:15:01.251933] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc76ea0 (9): Bad file descriptor 00:11:48.886 [2024-12-06 04:15:01.265548] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:11:48.886 passed 00:11:48.886 Test: blockdev write read 8 blocks ...passed 00:11:48.886 Test: blockdev write read size > 128k ...passed 00:11:48.886 Test: blockdev write read invalid size ...passed 00:11:48.886 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:48.886 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:48.886 Test: blockdev write read max offset ...passed 00:11:48.886 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:48.886 Test: blockdev writev readv 8 blocks ...passed 00:11:48.886 Test: blockdev writev readv 30 x 1block ...passed 00:11:48.886 Test: blockdev writev readv block ...passed 00:11:48.886 Test: blockdev writev readv size > 128k ...passed 00:11:48.886 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:48.886 Test: blockdev comparev and writev ...[2024-12-06 04:15:01.273690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:48.886 [2024-12-06 04:15:01.273857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:48.886 [2024-12-06 04:15:01.273969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:48.886 [2024-12-06 04:15:01.274067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:48.886 [2024-12-06 04:15:01.274566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:48.886 [2024-12-06 04:15:01.274712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:48.886 [2024-12-06 04:15:01.274826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:48.886 [2024-12-06 04:15:01.274921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:48.886 [2024-12-06 04:15:01.275346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE 
sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:48.886 [2024-12-06 04:15:01.275500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:48.887 [2024-12-06 04:15:01.275623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:48.887 [2024-12-06 04:15:01.275708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:48.887 [2024-12-06 04:15:01.276136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:48.887 [2024-12-06 04:15:01.276260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:48.887 [2024-12-06 04:15:01.276355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:48.887 [2024-12-06 04:15:01.276448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:48.887 passed 00:11:48.887 Test: blockdev nvme passthru rw ...passed 00:11:48.887 Test: blockdev nvme passthru vendor specific ...[2024-12-06 04:15:01.277425] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:48.887 [2024-12-06 04:15:01.277579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:48.887 [2024-12-06 04:15:01.277814] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:48.887 [2024-12-06 04:15:01.277935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:48.887 [2024-12-06 04:15:01.278184] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:48.887 [2024-12-06 04:15:01.278302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:48.887 [2024-12-06 04:15:01.278545] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:48.887 [2024-12-06 04:15:01.278669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:48.887 passed 00:11:48.887 Test: blockdev nvme admin passthru ...passed 00:11:48.887 Test: blockdev copy ...passed 00:11:48.887 00:11:48.887 Run Summary: Type Total Ran Passed Failed Inactive 00:11:48.887 suites 1 1 n/a 0 0 00:11:48.887 tests 23 23 23 0 0 00:11:48.887 asserts 152 152 152 0 n/a 00:11:48.887 00:11:48.887 Elapsed time = 0.153 seconds 00:11:49.146 04:15:01 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:49.146 04:15:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.146 04:15:01 -- common/autotest_common.sh@10 -- # set +x 00:11:49.146 04:15:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.146 04:15:01 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:49.146 04:15:01 -- target/bdevio.sh@30 -- # nvmftestfini 00:11:49.146 04:15:01 -- nvmf/common.sh@476 
-- # nvmfcleanup 00:11:49.146 04:15:01 -- nvmf/common.sh@116 -- # sync 00:11:49.146 04:15:01 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:49.146 04:15:01 -- nvmf/common.sh@119 -- # set +e 00:11:49.146 04:15:01 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:49.146 04:15:01 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:49.146 rmmod nvme_tcp 00:11:49.146 rmmod nvme_fabrics 00:11:49.146 rmmod nvme_keyring 00:11:49.146 04:15:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:49.146 04:15:01 -- nvmf/common.sh@123 -- # set -e 00:11:49.146 04:15:01 -- nvmf/common.sh@124 -- # return 0 00:11:49.146 04:15:01 -- nvmf/common.sh@477 -- # '[' -n 76477 ']' 00:11:49.146 04:15:01 -- nvmf/common.sh@478 -- # killprocess 76477 00:11:49.146 04:15:01 -- common/autotest_common.sh@936 -- # '[' -z 76477 ']' 00:11:49.146 04:15:01 -- common/autotest_common.sh@940 -- # kill -0 76477 00:11:49.146 04:15:01 -- common/autotest_common.sh@941 -- # uname 00:11:49.146 04:15:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:49.146 04:15:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76477 00:11:49.146 killing process with pid 76477 00:11:49.146 04:15:01 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:11:49.146 04:15:01 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:11:49.146 04:15:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76477' 00:11:49.146 04:15:01 -- common/autotest_common.sh@955 -- # kill 76477 00:11:49.146 04:15:01 -- common/autotest_common.sh@960 -- # wait 76477 00:11:49.405 04:15:01 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:49.405 04:15:01 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:49.405 04:15:01 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:49.405 04:15:01 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:49.405 04:15:01 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:49.405 04:15:01 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:49.405 04:15:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:49.405 04:15:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:49.405 04:15:01 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:11:49.405 ************************************ 00:11:49.405 END TEST nvmf_bdevio 00:11:49.405 ************************************ 00:11:49.405 00:11:49.405 real 0m2.822s 00:11:49.405 user 0m9.013s 00:11:49.405 sys 0m0.810s 00:11:49.405 04:15:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:49.405 04:15:01 -- common/autotest_common.sh@10 -- # set +x 00:11:49.405 04:15:01 -- nvmf/nvmf.sh@57 -- # '[' tcp = tcp ']' 00:11:49.405 04:15:01 -- nvmf/nvmf.sh@58 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:11:49.405 04:15:01 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:11:49.405 04:15:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:49.405 04:15:01 -- common/autotest_common.sh@10 -- # set +x 00:11:49.405 ************************************ 00:11:49.405 START TEST nvmf_bdevio_no_huge 00:11:49.405 ************************************ 00:11:49.405 04:15:01 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:11:49.665 * Looking for test storage... 
00:11:49.665 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:49.665 04:15:02 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:49.665 04:15:02 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:49.665 04:15:02 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:49.665 04:15:02 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:49.665 04:15:02 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:49.665 04:15:02 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:49.665 04:15:02 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:49.665 04:15:02 -- scripts/common.sh@335 -- # IFS=.-: 00:11:49.665 04:15:02 -- scripts/common.sh@335 -- # read -ra ver1 00:11:49.665 04:15:02 -- scripts/common.sh@336 -- # IFS=.-: 00:11:49.665 04:15:02 -- scripts/common.sh@336 -- # read -ra ver2 00:11:49.665 04:15:02 -- scripts/common.sh@337 -- # local 'op=<' 00:11:49.665 04:15:02 -- scripts/common.sh@339 -- # ver1_l=2 00:11:49.665 04:15:02 -- scripts/common.sh@340 -- # ver2_l=1 00:11:49.665 04:15:02 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:49.665 04:15:02 -- scripts/common.sh@343 -- # case "$op" in 00:11:49.665 04:15:02 -- scripts/common.sh@344 -- # : 1 00:11:49.665 04:15:02 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:49.665 04:15:02 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:49.665 04:15:02 -- scripts/common.sh@364 -- # decimal 1 00:11:49.665 04:15:02 -- scripts/common.sh@352 -- # local d=1 00:11:49.665 04:15:02 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:49.665 04:15:02 -- scripts/common.sh@354 -- # echo 1 00:11:49.665 04:15:02 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:49.665 04:15:02 -- scripts/common.sh@365 -- # decimal 2 00:11:49.665 04:15:02 -- scripts/common.sh@352 -- # local d=2 00:11:49.665 04:15:02 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:49.665 04:15:02 -- scripts/common.sh@354 -- # echo 2 00:11:49.665 04:15:02 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:49.665 04:15:02 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:49.665 04:15:02 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:49.665 04:15:02 -- scripts/common.sh@367 -- # return 0 00:11:49.665 04:15:02 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:49.665 04:15:02 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:49.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:49.665 --rc genhtml_branch_coverage=1 00:11:49.665 --rc genhtml_function_coverage=1 00:11:49.665 --rc genhtml_legend=1 00:11:49.665 --rc geninfo_all_blocks=1 00:11:49.665 --rc geninfo_unexecuted_blocks=1 00:11:49.665 00:11:49.665 ' 00:11:49.665 04:15:02 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:49.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:49.665 --rc genhtml_branch_coverage=1 00:11:49.665 --rc genhtml_function_coverage=1 00:11:49.665 --rc genhtml_legend=1 00:11:49.665 --rc geninfo_all_blocks=1 00:11:49.665 --rc geninfo_unexecuted_blocks=1 00:11:49.665 00:11:49.665 ' 00:11:49.665 04:15:02 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:49.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:49.665 --rc genhtml_branch_coverage=1 00:11:49.665 --rc genhtml_function_coverage=1 00:11:49.665 --rc genhtml_legend=1 00:11:49.665 --rc geninfo_all_blocks=1 00:11:49.665 --rc geninfo_unexecuted_blocks=1 00:11:49.665 00:11:49.665 ' 00:11:49.665 
04:15:02 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:49.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:49.665 --rc genhtml_branch_coverage=1 00:11:49.665 --rc genhtml_function_coverage=1 00:11:49.665 --rc genhtml_legend=1 00:11:49.665 --rc geninfo_all_blocks=1 00:11:49.665 --rc geninfo_unexecuted_blocks=1 00:11:49.665 00:11:49.665 ' 00:11:49.665 04:15:02 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:49.665 04:15:02 -- nvmf/common.sh@7 -- # uname -s 00:11:49.665 04:15:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:49.665 04:15:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:49.665 04:15:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:49.665 04:15:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:49.665 04:15:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:49.665 04:15:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:49.665 04:15:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:49.665 04:15:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:49.665 04:15:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:49.665 04:15:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:49.665 04:15:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cb4d3929-adbe-4142-b5d1-990bbf2d4fca 00:11:49.665 04:15:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=cb4d3929-adbe-4142-b5d1-990bbf2d4fca 00:11:49.665 04:15:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:49.665 04:15:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:49.665 04:15:02 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:49.665 04:15:02 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:49.665 04:15:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:49.665 04:15:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:49.665 04:15:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:49.665 04:15:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.665 04:15:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.665 04:15:02 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.665 04:15:02 -- paths/export.sh@5 -- # export PATH 00:11:49.665 04:15:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.665 04:15:02 -- nvmf/common.sh@46 -- # : 0 00:11:49.665 04:15:02 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:49.665 04:15:02 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:49.665 04:15:02 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:49.665 04:15:02 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:49.665 04:15:02 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:49.665 04:15:02 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:11:49.665 04:15:02 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:49.665 04:15:02 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:49.665 04:15:02 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:49.665 04:15:02 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:49.665 04:15:02 -- target/bdevio.sh@14 -- # nvmftestinit 00:11:49.665 04:15:02 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:49.665 04:15:02 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:49.665 04:15:02 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:49.665 04:15:02 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:49.665 04:15:02 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:49.665 04:15:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:49.665 04:15:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:49.665 04:15:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:49.665 04:15:02 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:11:49.665 04:15:02 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:11:49.665 04:15:02 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:11:49.665 04:15:02 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:11:49.666 04:15:02 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:11:49.666 04:15:02 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:11:49.666 04:15:02 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:49.666 04:15:02 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:49.666 04:15:02 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:49.666 04:15:02 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:11:49.666 04:15:02 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:49.666 04:15:02 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:49.666 04:15:02 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:49.666 04:15:02 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:49.666 04:15:02 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:49.666 04:15:02 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:49.666 04:15:02 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:49.666 04:15:02 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:49.666 04:15:02 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:11:49.666 04:15:02 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:11:49.666 Cannot find device "nvmf_tgt_br" 00:11:49.666 04:15:02 -- nvmf/common.sh@154 -- # true 00:11:49.666 04:15:02 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:11:49.924 Cannot find device "nvmf_tgt_br2" 00:11:49.924 04:15:02 -- nvmf/common.sh@155 -- # true 00:11:49.924 04:15:02 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:11:49.924 04:15:02 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:11:49.924 Cannot find device "nvmf_tgt_br" 00:11:49.924 04:15:02 -- nvmf/common.sh@157 -- # true 00:11:49.924 04:15:02 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:11:49.924 Cannot find device "nvmf_tgt_br2" 00:11:49.924 04:15:02 -- nvmf/common.sh@158 -- # true 00:11:49.924 04:15:02 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:11:49.924 04:15:02 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:11:49.924 04:15:02 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:49.925 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:49.925 04:15:02 -- nvmf/common.sh@161 -- # true 00:11:49.925 04:15:02 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:49.925 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:49.925 04:15:02 -- nvmf/common.sh@162 -- # true 00:11:49.925 04:15:02 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:11:49.925 04:15:02 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:49.925 04:15:02 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:49.925 04:15:02 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:49.925 04:15:02 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:49.925 04:15:02 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:49.925 04:15:02 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:49.925 04:15:02 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:49.925 04:15:02 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:49.925 04:15:02 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:11:49.925 04:15:02 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:11:49.925 04:15:02 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:11:49.925 04:15:02 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:11:49.925 04:15:02 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:49.925 04:15:02 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:49.925 04:15:02 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:11:49.925 04:15:02 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:11:49.925 04:15:02 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:11:49.925 04:15:02 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:11:50.183 04:15:02 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:50.183 04:15:02 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:50.183 04:15:02 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:50.183 04:15:02 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:50.183 04:15:02 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:11:50.183 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:50.183 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.091 ms 00:11:50.183 00:11:50.183 --- 10.0.0.2 ping statistics --- 00:11:50.183 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:50.183 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:11:50.183 04:15:02 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:11:50.183 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:50.183 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.035 ms 00:11:50.183 00:11:50.183 --- 10.0.0.3 ping statistics --- 00:11:50.183 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:50.183 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:11:50.183 04:15:02 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:50.183 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:50.183 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:11:50.183 00:11:50.183 --- 10.0.0.1 ping statistics --- 00:11:50.183 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:50.183 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:11:50.183 04:15:02 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:50.183 04:15:02 -- nvmf/common.sh@421 -- # return 0 00:11:50.183 04:15:02 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:50.183 04:15:02 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:50.183 04:15:02 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:50.183 04:15:02 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:50.183 04:15:02 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:50.183 04:15:02 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:50.183 04:15:02 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:50.183 04:15:02 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:50.183 04:15:02 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:50.183 04:15:02 -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:50.183 04:15:02 -- common/autotest_common.sh@10 -- # set +x 00:11:50.183 04:15:02 -- nvmf/common.sh@469 -- # nvmfpid=76703 00:11:50.183 04:15:02 -- nvmf/common.sh@470 -- # waitforlisten 76703 00:11:50.183 04:15:02 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:11:50.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
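Functionally this pass repeats the bdevio setup above; what changes is the launch line. The target (and, further down, bdevio itself) runs with --no-huge and an explicit -s memory size, so DPDK backs SPDK's memory with ordinary pages instead of hugepages, which the EAL parameter line below confirms (-m 1024 --no-huge --iova-mode=va). Side by side:

# nvmf_bdevio: hugepage-backed target
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78

# nvmf_bdevio_no_huge: no hugepages, memory capped at 1024 MB of ordinary pages
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78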
00:11:50.183 04:15:02 -- common/autotest_common.sh@829 -- # '[' -z 76703 ']' 00:11:50.183 04:15:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:50.183 04:15:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:50.183 04:15:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:50.183 04:15:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:50.183 04:15:02 -- common/autotest_common.sh@10 -- # set +x 00:11:50.183 [2024-12-06 04:15:02.615251] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:50.184 [2024-12-06 04:15:02.615348] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:11:50.442 [2024-12-06 04:15:02.764945] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:50.442 [2024-12-06 04:15:02.852823] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:50.442 [2024-12-06 04:15:02.852984] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:50.442 [2024-12-06 04:15:02.852996] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:50.442 [2024-12-06 04:15:02.853004] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:50.442 [2024-12-06 04:15:02.853165] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:11:50.442 [2024-12-06 04:15:02.853650] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:11:50.442 [2024-12-06 04:15:02.853847] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:11:50.442 [2024-12-06 04:15:02.853875] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:51.011 04:15:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:51.011 04:15:03 -- common/autotest_common.sh@862 -- # return 0 00:11:51.011 04:15:03 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:51.011 04:15:03 -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:51.011 04:15:03 -- common/autotest_common.sh@10 -- # set +x 00:11:51.323 04:15:03 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:51.323 04:15:03 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:51.323 04:15:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.323 04:15:03 -- common/autotest_common.sh@10 -- # set +x 00:11:51.323 [2024-12-06 04:15:03.615211] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:51.323 04:15:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.323 04:15:03 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:51.323 04:15:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.323 04:15:03 -- common/autotest_common.sh@10 -- # set +x 00:11:51.323 Malloc0 00:11:51.323 04:15:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.323 04:15:03 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:51.324 04:15:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.324 04:15:03 -- common/autotest_common.sh@10 -- # set +x 00:11:51.324 04:15:03 -- 
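As in the first bdevio pass, the initiator side is the bdevio app itself, fed an in-memory JSON config on a file descriptor: the --json /dev/fd/62 seen in the trace comes from bash process substitution around gen_nvmf_target_json. A standalone sketch of that invocation with the controller entry the harness generates (only the bdev_nvme_attach_controller entry appears verbatim in the trace; the outer subsystems/bdev wrapper is the usual SPDK JSON-config shape and is assumed here):

/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --no-huge -s 1024 --json <(
cat <<'JSON'
{
  "subsystems": [ {
    "subsystem": "bdev",
    "config": [ {
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4",
        "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1", "hdgst": false, "ddgst": false
      }
    } ]
  } ]
}
JSON
)

bdevio then attaches to cnode1 at 10.0.0.2:4420, sees a single Nvme1n1 bdev (the 64 MB Malloc0 namespace), and runs the blockdev test suite against it, as in the run below.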
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.324 04:15:03 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:51.324 04:15:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.324 04:15:03 -- common/autotest_common.sh@10 -- # set +x 00:11:51.324 04:15:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.324 04:15:03 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:51.324 04:15:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.324 04:15:03 -- common/autotest_common.sh@10 -- # set +x 00:11:51.324 [2024-12-06 04:15:03.660391] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:51.324 04:15:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.324 04:15:03 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:11:51.324 04:15:03 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:51.324 04:15:03 -- nvmf/common.sh@520 -- # config=() 00:11:51.324 04:15:03 -- nvmf/common.sh@520 -- # local subsystem config 00:11:51.324 04:15:03 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:11:51.324 04:15:03 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:11:51.324 { 00:11:51.324 "params": { 00:11:51.324 "name": "Nvme$subsystem", 00:11:51.324 "trtype": "$TEST_TRANSPORT", 00:11:51.324 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:51.324 "adrfam": "ipv4", 00:11:51.324 "trsvcid": "$NVMF_PORT", 00:11:51.324 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:51.324 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:51.324 "hdgst": ${hdgst:-false}, 00:11:51.324 "ddgst": ${ddgst:-false} 00:11:51.324 }, 00:11:51.324 "method": "bdev_nvme_attach_controller" 00:11:51.324 } 00:11:51.324 EOF 00:11:51.324 )") 00:11:51.324 04:15:03 -- nvmf/common.sh@542 -- # cat 00:11:51.324 04:15:03 -- nvmf/common.sh@544 -- # jq . 00:11:51.324 04:15:03 -- nvmf/common.sh@545 -- # IFS=, 00:11:51.324 04:15:03 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:11:51.324 "params": { 00:11:51.324 "name": "Nvme1", 00:11:51.324 "trtype": "tcp", 00:11:51.324 "traddr": "10.0.0.2", 00:11:51.324 "adrfam": "ipv4", 00:11:51.324 "trsvcid": "4420", 00:11:51.324 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:51.324 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:51.324 "hdgst": false, 00:11:51.324 "ddgst": false 00:11:51.324 }, 00:11:51.324 "method": "bdev_nvme_attach_controller" 00:11:51.324 }' 00:11:51.324 [2024-12-06 04:15:03.720768] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:51.324 [2024-12-06 04:15:03.720952] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid76739 ] 00:11:51.324 [2024-12-06 04:15:03.867584] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:51.582 [2024-12-06 04:15:03.991560] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:51.582 [2024-12-06 04:15:03.991699] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:51.582 [2024-12-06 04:15:03.991706] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:51.841 [2024-12-06 04:15:04.170055] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
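Before bdevio starts, the harness provisions the target over /var/tmp/spdk.sock: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, a subsystem, a namespace, and a listener on 10.0.0.2:4420. The rpc_cmd calls in the trace correspond roughly to the following scripts/rpc.py invocations (a sketch derived from the trace; the rpc.py path is the one used by this job):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

bdevio itself is then pointed at that listener through gen_nvmf_target_json, which emits the bdev_nvme_attach_controller configuration shown in the trace and feeds it in via --json /dev/fd/62.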
00:11:51.841 [2024-12-06 04:15:04.170340] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:11:51.841 I/O targets: 00:11:51.841 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:51.841 00:11:51.841 00:11:51.841 CUnit - A unit testing framework for C - Version 2.1-3 00:11:51.841 http://cunit.sourceforge.net/ 00:11:51.841 00:11:51.841 00:11:51.841 Suite: bdevio tests on: Nvme1n1 00:11:51.841 Test: blockdev write read block ...passed 00:11:51.841 Test: blockdev write zeroes read block ...passed 00:11:51.841 Test: blockdev write zeroes read no split ...passed 00:11:51.841 Test: blockdev write zeroes read split ...passed 00:11:51.841 Test: blockdev write zeroes read split partial ...passed 00:11:51.841 Test: blockdev reset ...[2024-12-06 04:15:04.213860] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:11:51.841 [2024-12-06 04:15:04.214122] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf9260 (9): Bad file descriptor 00:11:51.841 [2024-12-06 04:15:04.228734] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:11:51.841 passed 00:11:51.841 Test: blockdev write read 8 blocks ...passed 00:11:51.841 Test: blockdev write read size > 128k ...passed 00:11:51.841 Test: blockdev write read invalid size ...passed 00:11:51.841 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:51.841 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:51.841 Test: blockdev write read max offset ...passed 00:11:51.841 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:51.841 Test: blockdev writev readv 8 blocks ...passed 00:11:51.841 Test: blockdev writev readv 30 x 1block ...passed 00:11:51.841 Test: blockdev writev readv block ...passed 00:11:51.841 Test: blockdev writev readv size > 128k ...passed 00:11:51.841 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:51.841 Test: blockdev comparev and writev ...[2024-12-06 04:15:04.239332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:51.841 [2024-12-06 04:15:04.239546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:51.841 [2024-12-06 04:15:04.239575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:51.841 [2024-12-06 04:15:04.239587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:51.841 [2024-12-06 04:15:04.239973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:51.841 [2024-12-06 04:15:04.239999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:51.841 [2024-12-06 04:15:04.240025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:51.841 [2024-12-06 04:15:04.240044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:51.841 [2024-12-06 04:15:04.240403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE 
sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:51.841 [2024-12-06 04:15:04.240424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:51.841 [2024-12-06 04:15:04.240441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:51.841 [2024-12-06 04:15:04.240451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:51.841 [2024-12-06 04:15:04.241296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:51.841 [2024-12-06 04:15:04.241326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:51.841 [2024-12-06 04:15:04.241344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:51.841 [2024-12-06 04:15:04.241354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:51.841 passed 00:11:51.841 Test: blockdev nvme passthru rw ...passed 00:11:51.841 Test: blockdev nvme passthru vendor specific ...[2024-12-06 04:15:04.242449] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:51.841 [2024-12-06 04:15:04.242479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:51.841 [2024-12-06 04:15:04.242591] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:51.841 [2024-12-06 04:15:04.242607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:51.841 [2024-12-06 04:15:04.242720] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:51.841 [2024-12-06 04:15:04.242740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:51.841 passed 00:11:51.841 Test: blockdev nvme admin passthru ...[2024-12-06 04:15:04.242871] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:51.841 [2024-12-06 04:15:04.242891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:51.841 passed 00:11:51.841 Test: blockdev copy ...passed 00:11:51.841 00:11:51.841 Run Summary: Type Total Ran Passed Failed Inactive 00:11:51.841 suites 1 1 n/a 0 0 00:11:51.841 tests 23 23 23 0 0 00:11:51.841 asserts 152 152 152 0 n/a 00:11:51.841 00:11:51.841 Elapsed time = 0.169 seconds 00:11:52.100 04:15:04 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:52.100 04:15:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.100 04:15:04 -- common/autotest_common.sh@10 -- # set +x 00:11:52.100 04:15:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.100 04:15:04 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:52.100 04:15:04 -- target/bdevio.sh@30 -- # nvmftestfini 00:11:52.100 04:15:04 -- nvmf/common.sh@476 
-- # nvmfcleanup 00:11:52.100 04:15:04 -- nvmf/common.sh@116 -- # sync 00:11:52.100 04:15:04 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:52.100 04:15:04 -- nvmf/common.sh@119 -- # set +e 00:11:52.100 04:15:04 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:52.100 04:15:04 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:52.100 rmmod nvme_tcp 00:11:52.359 rmmod nvme_fabrics 00:11:52.359 rmmod nvme_keyring 00:11:52.359 04:15:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:52.359 04:15:04 -- nvmf/common.sh@123 -- # set -e 00:11:52.359 04:15:04 -- nvmf/common.sh@124 -- # return 0 00:11:52.359 04:15:04 -- nvmf/common.sh@477 -- # '[' -n 76703 ']' 00:11:52.359 04:15:04 -- nvmf/common.sh@478 -- # killprocess 76703 00:11:52.359 04:15:04 -- common/autotest_common.sh@936 -- # '[' -z 76703 ']' 00:11:52.359 04:15:04 -- common/autotest_common.sh@940 -- # kill -0 76703 00:11:52.359 04:15:04 -- common/autotest_common.sh@941 -- # uname 00:11:52.359 04:15:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:52.359 04:15:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76703 00:11:52.359 04:15:04 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:11:52.359 04:15:04 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:11:52.359 killing process with pid 76703 00:11:52.359 04:15:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76703' 00:11:52.359 04:15:04 -- common/autotest_common.sh@955 -- # kill 76703 00:11:52.359 04:15:04 -- common/autotest_common.sh@960 -- # wait 76703 00:11:52.947 04:15:05 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:52.947 04:15:05 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:52.947 04:15:05 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:52.947 04:15:05 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:52.947 04:15:05 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:52.947 04:15:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:52.947 04:15:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:52.947 04:15:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:52.947 04:15:05 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:11:52.947 00:11:52.947 real 0m3.270s 00:11:52.947 user 0m10.337s 00:11:52.947 sys 0m1.355s 00:11:52.947 04:15:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:52.947 ************************************ 00:11:52.947 END TEST nvmf_bdevio_no_huge 00:11:52.947 ************************************ 00:11:52.947 04:15:05 -- common/autotest_common.sh@10 -- # set +x 00:11:52.947 04:15:05 -- nvmf/nvmf.sh@59 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:11:52.947 04:15:05 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:52.947 04:15:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:52.947 04:15:05 -- common/autotest_common.sh@10 -- # set +x 00:11:52.947 ************************************ 00:11:52.947 START TEST nvmf_tls 00:11:52.947 ************************************ 00:11:52.947 04:15:05 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:11:52.947 * Looking for test storage... 
00:11:52.947 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:52.947 04:15:05 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:52.947 04:15:05 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:52.947 04:15:05 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:52.947 04:15:05 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:52.947 04:15:05 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:52.947 04:15:05 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:52.947 04:15:05 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:52.947 04:15:05 -- scripts/common.sh@335 -- # IFS=.-: 00:11:52.947 04:15:05 -- scripts/common.sh@335 -- # read -ra ver1 00:11:52.947 04:15:05 -- scripts/common.sh@336 -- # IFS=.-: 00:11:52.947 04:15:05 -- scripts/common.sh@336 -- # read -ra ver2 00:11:52.947 04:15:05 -- scripts/common.sh@337 -- # local 'op=<' 00:11:52.947 04:15:05 -- scripts/common.sh@339 -- # ver1_l=2 00:11:52.947 04:15:05 -- scripts/common.sh@340 -- # ver2_l=1 00:11:52.947 04:15:05 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:52.947 04:15:05 -- scripts/common.sh@343 -- # case "$op" in 00:11:52.947 04:15:05 -- scripts/common.sh@344 -- # : 1 00:11:52.947 04:15:05 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:52.947 04:15:05 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:52.947 04:15:05 -- scripts/common.sh@364 -- # decimal 1 00:11:52.947 04:15:05 -- scripts/common.sh@352 -- # local d=1 00:11:52.947 04:15:05 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:52.947 04:15:05 -- scripts/common.sh@354 -- # echo 1 00:11:52.947 04:15:05 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:52.948 04:15:05 -- scripts/common.sh@365 -- # decimal 2 00:11:52.948 04:15:05 -- scripts/common.sh@352 -- # local d=2 00:11:52.948 04:15:05 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:52.948 04:15:05 -- scripts/common.sh@354 -- # echo 2 00:11:52.948 04:15:05 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:52.948 04:15:05 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:52.948 04:15:05 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:52.948 04:15:05 -- scripts/common.sh@367 -- # return 0 00:11:52.948 04:15:05 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:52.948 04:15:05 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:52.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:52.948 --rc genhtml_branch_coverage=1 00:11:52.948 --rc genhtml_function_coverage=1 00:11:52.948 --rc genhtml_legend=1 00:11:52.948 --rc geninfo_all_blocks=1 00:11:52.948 --rc geninfo_unexecuted_blocks=1 00:11:52.948 00:11:52.948 ' 00:11:52.948 04:15:05 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:52.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:52.948 --rc genhtml_branch_coverage=1 00:11:52.948 --rc genhtml_function_coverage=1 00:11:52.948 --rc genhtml_legend=1 00:11:52.948 --rc geninfo_all_blocks=1 00:11:52.948 --rc geninfo_unexecuted_blocks=1 00:11:52.948 00:11:52.948 ' 00:11:52.948 04:15:05 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:52.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:52.948 --rc genhtml_branch_coverage=1 00:11:52.948 --rc genhtml_function_coverage=1 00:11:52.948 --rc genhtml_legend=1 00:11:52.948 --rc geninfo_all_blocks=1 00:11:52.948 --rc geninfo_unexecuted_blocks=1 00:11:52.948 00:11:52.948 ' 00:11:52.948 
04:15:05 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:52.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:52.948 --rc genhtml_branch_coverage=1 00:11:52.948 --rc genhtml_function_coverage=1 00:11:52.948 --rc genhtml_legend=1 00:11:52.948 --rc geninfo_all_blocks=1 00:11:52.948 --rc geninfo_unexecuted_blocks=1 00:11:52.948 00:11:52.948 ' 00:11:52.948 04:15:05 -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:52.948 04:15:05 -- nvmf/common.sh@7 -- # uname -s 00:11:52.948 04:15:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:52.948 04:15:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:52.948 04:15:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:52.948 04:15:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:52.948 04:15:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:52.948 04:15:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:52.948 04:15:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:52.948 04:15:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:52.948 04:15:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:52.948 04:15:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:52.948 04:15:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cb4d3929-adbe-4142-b5d1-990bbf2d4fca 00:11:52.948 04:15:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=cb4d3929-adbe-4142-b5d1-990bbf2d4fca 00:11:52.948 04:15:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:52.948 04:15:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:52.948 04:15:05 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:52.948 04:15:05 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:52.948 04:15:05 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:52.948 04:15:05 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:52.948 04:15:05 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:52.948 04:15:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.948 04:15:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.948 04:15:05 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.948 04:15:05 -- paths/export.sh@5 -- # export PATH 00:11:52.948 04:15:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.948 04:15:05 -- nvmf/common.sh@46 -- # : 0 00:11:52.948 04:15:05 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:52.948 04:15:05 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:52.948 04:15:05 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:52.948 04:15:05 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:52.948 04:15:05 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:52.948 04:15:05 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:11:52.948 04:15:05 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:52.948 04:15:05 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:53.224 04:15:05 -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:53.224 04:15:05 -- target/tls.sh@71 -- # nvmftestinit 00:11:53.224 04:15:05 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:53.224 04:15:05 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:53.224 04:15:05 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:53.224 04:15:05 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:53.224 04:15:05 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:53.224 04:15:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:53.224 04:15:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:53.224 04:15:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:53.224 04:15:05 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:11:53.224 04:15:05 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:11:53.224 04:15:05 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:11:53.224 04:15:05 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:11:53.224 04:15:05 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:11:53.224 04:15:05 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:11:53.224 04:15:05 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:53.224 04:15:05 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:53.224 04:15:05 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:53.224 04:15:05 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:11:53.224 04:15:05 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:53.224 04:15:05 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:53.224 04:15:05 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:53.224 
04:15:05 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:53.224 04:15:05 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:53.224 04:15:05 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:53.224 04:15:05 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:53.224 04:15:05 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:53.224 04:15:05 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:11:53.224 04:15:05 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:11:53.224 Cannot find device "nvmf_tgt_br" 00:11:53.224 04:15:05 -- nvmf/common.sh@154 -- # true 00:11:53.224 04:15:05 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:11:53.224 Cannot find device "nvmf_tgt_br2" 00:11:53.224 04:15:05 -- nvmf/common.sh@155 -- # true 00:11:53.224 04:15:05 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:11:53.224 04:15:05 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:11:53.224 Cannot find device "nvmf_tgt_br" 00:11:53.224 04:15:05 -- nvmf/common.sh@157 -- # true 00:11:53.224 04:15:05 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:11:53.224 Cannot find device "nvmf_tgt_br2" 00:11:53.224 04:15:05 -- nvmf/common.sh@158 -- # true 00:11:53.224 04:15:05 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:11:53.224 04:15:05 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:11:53.224 04:15:05 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:53.224 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:53.224 04:15:05 -- nvmf/common.sh@161 -- # true 00:11:53.224 04:15:05 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:53.224 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:53.224 04:15:05 -- nvmf/common.sh@162 -- # true 00:11:53.224 04:15:05 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:11:53.224 04:15:05 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:53.224 04:15:05 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:53.224 04:15:05 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:53.224 04:15:05 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:53.224 04:15:05 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:53.224 04:15:05 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:53.224 04:15:05 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:53.225 04:15:05 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:53.225 04:15:05 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:11:53.225 04:15:05 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:11:53.225 04:15:05 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:11:53.225 04:15:05 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:11:53.225 04:15:05 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:53.225 04:15:05 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:53.225 04:15:05 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:53.483 04:15:05 -- 
nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:11:53.483 04:15:05 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:11:53.483 04:15:05 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:11:53.483 04:15:05 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:53.483 04:15:05 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:53.483 04:15:05 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:53.483 04:15:05 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:53.483 04:15:05 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:11:53.483 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:53.483 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.118 ms 00:11:53.483 00:11:53.483 --- 10.0.0.2 ping statistics --- 00:11:53.483 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:53.483 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:11:53.483 04:15:05 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:11:53.483 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:53.483 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:11:53.483 00:11:53.483 --- 10.0.0.3 ping statistics --- 00:11:53.483 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:53.483 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:11:53.483 04:15:05 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:53.483 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:53.483 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.055 ms 00:11:53.483 00:11:53.483 --- 10.0.0.1 ping statistics --- 00:11:53.483 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:53.483 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:11:53.483 04:15:05 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:53.483 04:15:05 -- nvmf/common.sh@421 -- # return 0 00:11:53.483 04:15:05 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:53.483 04:15:05 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:53.483 04:15:05 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:53.483 04:15:05 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:53.483 04:15:05 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:53.483 04:15:05 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:53.483 04:15:05 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:53.483 04:15:05 -- target/tls.sh@72 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:11:53.483 04:15:05 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:53.483 04:15:05 -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:53.483 04:15:05 -- common/autotest_common.sh@10 -- # set +x 00:11:53.483 04:15:05 -- nvmf/common.sh@469 -- # nvmfpid=76925 00:11:53.483 04:15:05 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:11:53.483 04:15:05 -- nvmf/common.sh@470 -- # waitforlisten 76925 00:11:53.483 04:15:05 -- common/autotest_common.sh@829 -- # '[' -z 76925 ']' 00:11:53.483 04:15:05 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:53.483 04:15:05 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:53.483 04:15:05 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:53.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:53.483 04:15:05 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:53.483 04:15:05 -- common/autotest_common.sh@10 -- # set +x 00:11:53.483 [2024-12-06 04:15:05.944963] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:53.483 [2024-12-06 04:15:05.945073] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:53.741 [2024-12-06 04:15:06.089540] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:53.741 [2024-12-06 04:15:06.168259] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:53.741 [2024-12-06 04:15:06.168473] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:53.741 [2024-12-06 04:15:06.168490] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:53.741 [2024-12-06 04:15:06.168501] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:53.741 [2024-12-06 04:15:06.168531] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:54.674 04:15:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:54.674 04:15:06 -- common/autotest_common.sh@862 -- # return 0 00:11:54.674 04:15:06 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:54.675 04:15:06 -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:54.675 04:15:06 -- common/autotest_common.sh@10 -- # set +x 00:11:54.675 04:15:07 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:54.675 04:15:07 -- target/tls.sh@74 -- # '[' tcp '!=' tcp ']' 00:11:54.675 04:15:07 -- target/tls.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:11:54.934 true 00:11:54.934 04:15:07 -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:11:54.934 04:15:07 -- target/tls.sh@82 -- # jq -r .tls_version 00:11:55.194 04:15:07 -- target/tls.sh@82 -- # version=0 00:11:55.194 04:15:07 -- target/tls.sh@83 -- # [[ 0 != \0 ]] 00:11:55.194 04:15:07 -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:11:55.194 04:15:07 -- target/tls.sh@90 -- # jq -r .tls_version 00:11:55.194 04:15:07 -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:11:55.763 04:15:08 -- target/tls.sh@90 -- # version=13 00:11:55.763 04:15:08 -- target/tls.sh@91 -- # [[ 13 != \1\3 ]] 00:11:55.763 04:15:08 -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:11:56.022 04:15:08 -- target/tls.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:11:56.022 04:15:08 -- target/tls.sh@98 -- # jq -r .tls_version 00:11:56.022 04:15:08 -- target/tls.sh@98 -- # version=7 00:11:56.022 04:15:08 -- target/tls.sh@99 -- # [[ 7 != \7 ]] 00:11:56.022 04:15:08 -- target/tls.sh@105 -- # jq -r .enable_ktls 00:11:56.022 04:15:08 -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:11:56.281 04:15:08 -- target/tls.sh@105 -- # ktls=false 00:11:56.281 04:15:08 -- 
target/tls.sh@106 -- # [[ false != \f\a\l\s\e ]] 00:11:56.281 04:15:08 -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:11:56.540 04:15:09 -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:11:56.540 04:15:09 -- target/tls.sh@113 -- # jq -r .enable_ktls 00:11:56.800 04:15:09 -- target/tls.sh@113 -- # ktls=true 00:11:56.800 04:15:09 -- target/tls.sh@114 -- # [[ true != \t\r\u\e ]] 00:11:56.800 04:15:09 -- target/tls.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:11:57.059 04:15:09 -- target/tls.sh@121 -- # jq -r .enable_ktls 00:11:57.059 04:15:09 -- target/tls.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:11:57.628 04:15:09 -- target/tls.sh@121 -- # ktls=false 00:11:57.628 04:15:09 -- target/tls.sh@122 -- # [[ false != \f\a\l\s\e ]] 00:11:57.628 04:15:09 -- target/tls.sh@127 -- # format_interchange_psk 00112233445566778899aabbccddeeff 00:11:57.628 04:15:09 -- target/tls.sh@49 -- # local key hash crc 00:11:57.628 04:15:09 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff 00:11:57.628 04:15:09 -- target/tls.sh@51 -- # hash=01 00:11:57.628 04:15:09 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff 00:11:57.628 04:15:09 -- target/tls.sh@52 -- # gzip -1 -c 00:11:57.628 04:15:09 -- target/tls.sh@52 -- # tail -c8 00:11:57.628 04:15:09 -- target/tls.sh@52 -- # head -c 4 00:11:57.628 04:15:09 -- target/tls.sh@52 -- # crc='p$H�' 00:11:57.628 04:15:09 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:11:57.628 04:15:09 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeffp$H�' 00:11:57.628 04:15:09 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:11:57.628 04:15:09 -- target/tls.sh@127 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:11:57.628 04:15:09 -- target/tls.sh@128 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 00:11:57.628 04:15:09 -- target/tls.sh@49 -- # local key hash crc 00:11:57.628 04:15:09 -- target/tls.sh@51 -- # key=ffeeddccbbaa99887766554433221100 00:11:57.628 04:15:09 -- target/tls.sh@51 -- # hash=01 00:11:57.628 04:15:09 -- target/tls.sh@52 -- # echo -n ffeeddccbbaa99887766554433221100 00:11:57.628 04:15:09 -- target/tls.sh@52 -- # gzip -1 -c 00:11:57.628 04:15:09 -- target/tls.sh@52 -- # head -c 4 00:11:57.628 04:15:09 -- target/tls.sh@52 -- # tail -c8 00:11:57.628 04:15:09 -- target/tls.sh@52 -- # crc=$'_\006o\330' 00:11:57.628 04:15:09 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:11:57.628 04:15:09 -- target/tls.sh@54 -- # echo -n $'ffeeddccbbaa99887766554433221100_\006o\330' 00:11:57.629 04:15:09 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:11:57.629 04:15:09 -- target/tls.sh@128 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:11:57.629 04:15:09 -- target/tls.sh@130 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:57.629 04:15:09 -- target/tls.sh@131 -- # key_2_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:11:57.629 04:15:09 -- target/tls.sh@133 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:11:57.629 04:15:09 -- target/tls.sh@134 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:11:57.629 04:15:09 -- target/tls.sh@136 -- # chmod 0600 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:57.629 04:15:09 -- target/tls.sh@137 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:11:57.629 04:15:09 -- target/tls.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:11:57.887 04:15:10 -- target/tls.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:11:58.147 04:15:10 -- target/tls.sh@142 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:58.147 04:15:10 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:58.147 04:15:10 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:11:58.406 [2024-12-06 04:15:10.886602] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:58.406 04:15:10 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:11:58.664 04:15:11 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:11:58.924 [2024-12-06 04:15:11.326718] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:11:58.924 [2024-12-06 04:15:11.326981] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:58.924 04:15:11 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:11:59.184 malloc0 00:11:59.184 04:15:11 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:11:59.443 04:15:11 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:59.702 04:15:12 -- target/tls.sh@146 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:12:11.937 Initializing NVMe Controllers 00:12:11.937 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:11.937 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:11.937 Initialization complete. Launching workers. 
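The two keys used by tls.sh are produced by format_interchange_psk, traced above: the configured hex string is run through gzip -1 purely to obtain its CRC32 (the last 8 bytes of a gzip stream are the CRC32 followed by the input size, so tail -c8 | head -c4 extracts the CRC), and key plus CRC is base64-encoded into the NVMeTLSkey-1:01:...: interchange form, where the 01 field is the hash identifier set earlier in the trace. A simplified sketch of the same derivation; the real helper streams through file descriptors so the binary CRC bytes never sit in a shell variable, whereas this short form only works for keys whose CRC bytes happen to be shell-safe, as both keys above are:

    key=00112233445566778899aabbccddeeff
    # gzip -1 is used only as a cheap CRC32 generator; grab the 4 CRC bytes from its trailer.
    crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c4)
    printf 'NVMeTLSkey-1:01:%s:\n' "$(echo -n "${key}${crc}" | base64)"
    # -> NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:

The resulting strings are written to key1.txt and key2.txt, chmod 0600, and key1.txt is registered for the host with nvmf_subsystem_add_host --psk before the TLS listener is exercised.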
00:12:11.937 ======================================================== 00:12:11.937 Latency(us) 00:12:11.937 Device Information : IOPS MiB/s Average min max 00:12:11.937 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10703.09 41.81 5980.80 1414.66 10672.91 00:12:11.937 ======================================================== 00:12:11.937 Total : 10703.09 41.81 5980.80 1414.66 10672.91 00:12:11.937 00:12:11.937 04:15:22 -- target/tls.sh@152 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:12:11.937 04:15:22 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:12:11.937 04:15:22 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:12:11.937 04:15:22 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:12:11.937 04:15:22 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:12:11.937 04:15:22 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:11.937 04:15:22 -- target/tls.sh@28 -- # bdevperf_pid=77173 00:12:11.937 04:15:22 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:11.937 04:15:22 -- target/tls.sh@31 -- # waitforlisten 77173 /var/tmp/bdevperf.sock 00:12:11.937 04:15:22 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:11.937 04:15:22 -- common/autotest_common.sh@829 -- # '[' -z 77173 ']' 00:12:11.937 04:15:22 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:11.937 04:15:22 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:11.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:11.937 04:15:22 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:11.937 04:15:22 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:11.937 04:15:22 -- common/autotest_common.sh@10 -- # set +x 00:12:11.937 [2024-12-06 04:15:22.395026] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:11.937 [2024-12-06 04:15:22.395125] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77173 ] 00:12:11.937 [2024-12-06 04:15:22.537691] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:11.937 [2024-12-06 04:15:22.619851] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:11.937 04:15:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:11.937 04:15:23 -- common/autotest_common.sh@862 -- # return 0 00:12:11.937 04:15:23 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:12:11.937 [2024-12-06 04:15:23.587890] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:11.937 TLSTESTn1 00:12:11.937 04:15:23 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:12:11.937 Running I/O for 10 seconds... 
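run_bdevperf, traced above, drives the TLS I/O in three steps: bdevperf is started idle (-z) on its own RPC socket, a controller is attached to the TLS listener with the PSK file, and bdevperf.py then triggers the queued verify workload. A sketch of those steps with the values recorded in the trace:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
    # Once /var/tmp/bdevperf.sock is listening, attach a TLS-enabled controller (bdev "TLSTEST"):
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt
    # Kick off the workload; TLSTESTn1 is the namespace bdev created by the attach.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -t 20 -s /var/tmp/bdevperf.sock perform_tests

The negative cases that follow repeat the same sequence but swap in key2.txt or host2, and there the attach is expected to fail, which is why the harness wraps them in NOT and treats the -32602 "Invalid parameters" response as a pass.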
00:12:21.968 00:12:21.968 Latency(us) 00:12:21.968 [2024-12-06T04:15:34.533Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:21.968 [2024-12-06T04:15:34.533Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:12:21.968 Verification LBA range: start 0x0 length 0x2000 00:12:21.968 TLSTESTn1 : 10.01 6223.46 24.31 0.00 0.00 20532.62 4706.68 30980.65 00:12:21.968 [2024-12-06T04:15:34.533Z] =================================================================================================================== 00:12:21.968 [2024-12-06T04:15:34.533Z] Total : 6223.46 24.31 0.00 0.00 20532.62 4706.68 30980.65 00:12:21.968 0 00:12:21.968 04:15:33 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:21.968 04:15:33 -- target/tls.sh@45 -- # killprocess 77173 00:12:21.968 04:15:33 -- common/autotest_common.sh@936 -- # '[' -z 77173 ']' 00:12:21.968 04:15:33 -- common/autotest_common.sh@940 -- # kill -0 77173 00:12:21.968 04:15:33 -- common/autotest_common.sh@941 -- # uname 00:12:21.968 04:15:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:21.968 04:15:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77173 00:12:21.968 killing process with pid 77173 00:12:21.968 Received shutdown signal, test time was about 10.000000 seconds 00:12:21.968 00:12:21.968 Latency(us) 00:12:21.968 [2024-12-06T04:15:34.533Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:21.968 [2024-12-06T04:15:34.533Z] =================================================================================================================== 00:12:21.968 [2024-12-06T04:15:34.533Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:21.968 04:15:33 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:12:21.968 04:15:33 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:12:21.968 04:15:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77173' 00:12:21.968 04:15:33 -- common/autotest_common.sh@955 -- # kill 77173 00:12:21.968 04:15:33 -- common/autotest_common.sh@960 -- # wait 77173 00:12:21.968 04:15:34 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:12:21.968 04:15:34 -- common/autotest_common.sh@650 -- # local es=0 00:12:21.969 04:15:34 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:12:21.969 04:15:34 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:12:21.969 04:15:34 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:21.969 04:15:34 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:12:21.969 04:15:34 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:21.969 04:15:34 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:12:21.969 04:15:34 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:12:21.969 04:15:34 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:12:21.969 04:15:34 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:12:21.969 04:15:34 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt' 00:12:21.969 04:15:34 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:21.969 
04:15:34 -- target/tls.sh@28 -- # bdevperf_pid=77307 00:12:21.969 04:15:34 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:21.969 04:15:34 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:21.969 04:15:34 -- target/tls.sh@31 -- # waitforlisten 77307 /var/tmp/bdevperf.sock 00:12:21.969 04:15:34 -- common/autotest_common.sh@829 -- # '[' -z 77307 ']' 00:12:21.969 04:15:34 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:21.969 04:15:34 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:21.969 04:15:34 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:21.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:21.969 04:15:34 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:21.969 04:15:34 -- common/autotest_common.sh@10 -- # set +x 00:12:21.969 [2024-12-06 04:15:34.220826] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:21.969 [2024-12-06 04:15:34.220950] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77307 ] 00:12:21.969 [2024-12-06 04:15:34.358558] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:21.969 [2024-12-06 04:15:34.444205] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:22.904 04:15:35 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:22.904 04:15:35 -- common/autotest_common.sh@862 -- # return 0 00:12:22.904 04:15:35 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:12:23.161 [2024-12-06 04:15:35.497881] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:23.161 [2024-12-06 04:15:35.502992] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:12:23.161 [2024-12-06 04:15:35.503549] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc44f90 (107): Transport endpoint is not connected 00:12:23.162 [2024-12-06 04:15:35.504536] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc44f90 (9): Bad file descriptor 00:12:23.162 [2024-12-06 04:15:35.505532] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:12:23.162 [2024-12-06 04:15:35.505557] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:12:23.162 [2024-12-06 04:15:35.505568] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:12:23.162 request: 00:12:23.162 { 00:12:23.162 "name": "TLSTEST", 00:12:23.162 "trtype": "tcp", 00:12:23.162 "traddr": "10.0.0.2", 00:12:23.162 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:23.162 "adrfam": "ipv4", 00:12:23.162 "trsvcid": "4420", 00:12:23.162 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:23.162 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt", 00:12:23.162 "method": "bdev_nvme_attach_controller", 00:12:23.162 "req_id": 1 00:12:23.162 } 00:12:23.162 Got JSON-RPC error response 00:12:23.162 response: 00:12:23.162 { 00:12:23.162 "code": -32602, 00:12:23.162 "message": "Invalid parameters" 00:12:23.162 } 00:12:23.162 04:15:35 -- target/tls.sh@36 -- # killprocess 77307 00:12:23.162 04:15:35 -- common/autotest_common.sh@936 -- # '[' -z 77307 ']' 00:12:23.162 04:15:35 -- common/autotest_common.sh@940 -- # kill -0 77307 00:12:23.162 04:15:35 -- common/autotest_common.sh@941 -- # uname 00:12:23.162 04:15:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:23.162 04:15:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77307 00:12:23.162 killing process with pid 77307 00:12:23.162 Received shutdown signal, test time was about 10.000000 seconds 00:12:23.162 00:12:23.162 Latency(us) 00:12:23.162 [2024-12-06T04:15:35.727Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:23.162 [2024-12-06T04:15:35.727Z] =================================================================================================================== 00:12:23.162 [2024-12-06T04:15:35.727Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:23.162 04:15:35 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:12:23.162 04:15:35 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:12:23.162 04:15:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77307' 00:12:23.162 04:15:35 -- common/autotest_common.sh@955 -- # kill 77307 00:12:23.162 04:15:35 -- common/autotest_common.sh@960 -- # wait 77307 00:12:23.420 04:15:35 -- target/tls.sh@37 -- # return 1 00:12:23.420 04:15:35 -- common/autotest_common.sh@653 -- # es=1 00:12:23.420 04:15:35 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:23.420 04:15:35 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:23.420 04:15:35 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:23.420 04:15:35 -- target/tls.sh@158 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:12:23.420 04:15:35 -- common/autotest_common.sh@650 -- # local es=0 00:12:23.420 04:15:35 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:12:23.420 04:15:35 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:12:23.420 04:15:35 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:23.420 04:15:35 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:12:23.420 04:15:35 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:23.420 04:15:35 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:12:23.420 04:15:35 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:12:23.420 04:15:35 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:12:23.420 04:15:35 -- target/tls.sh@23 -- # 
hostnqn=nqn.2016-06.io.spdk:host2 00:12:23.420 04:15:35 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:12:23.420 04:15:35 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:23.420 04:15:35 -- target/tls.sh@28 -- # bdevperf_pid=77334 00:12:23.420 04:15:35 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:23.420 04:15:35 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:23.420 04:15:35 -- target/tls.sh@31 -- # waitforlisten 77334 /var/tmp/bdevperf.sock 00:12:23.420 04:15:35 -- common/autotest_common.sh@829 -- # '[' -z 77334 ']' 00:12:23.420 04:15:35 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:23.420 04:15:35 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:23.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:23.420 04:15:35 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:23.420 04:15:35 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:23.420 04:15:35 -- common/autotest_common.sh@10 -- # set +x 00:12:23.420 [2024-12-06 04:15:35.919491] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:23.420 [2024-12-06 04:15:35.919607] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77334 ] 00:12:23.678 [2024-12-06 04:15:36.053051] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:23.678 [2024-12-06 04:15:36.137048] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:24.614 04:15:36 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:24.614 04:15:36 -- common/autotest_common.sh@862 -- # return 0 00:12:24.614 04:15:36 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:12:24.872 [2024-12-06 04:15:37.189451] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:24.872 [2024-12-06 04:15:37.194537] tcp.c: 868:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:12:24.872 [2024-12-06 04:15:37.194583] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:12:24.872 [2024-12-06 04:15:37.194667] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:12:24.872 [2024-12-06 04:15:37.195186] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234af90 (107): Transport endpoint is not connected 00:12:24.872 [2024-12-06 04:15:37.196176] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234af90 (9): Bad file descriptor 00:12:24.872 [2024-12-06 04:15:37.197172] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: 
[nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:12:24.872 [2024-12-06 04:15:37.197194] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:12:24.872 [2024-12-06 04:15:37.197220] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:12:24.872 request: 00:12:24.872 { 00:12:24.872 "name": "TLSTEST", 00:12:24.872 "trtype": "tcp", 00:12:24.872 "traddr": "10.0.0.2", 00:12:24.872 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:12:24.872 "adrfam": "ipv4", 00:12:24.872 "trsvcid": "4420", 00:12:24.872 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:24.872 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt", 00:12:24.872 "method": "bdev_nvme_attach_controller", 00:12:24.872 "req_id": 1 00:12:24.872 } 00:12:24.872 Got JSON-RPC error response 00:12:24.872 response: 00:12:24.872 { 00:12:24.872 "code": -32602, 00:12:24.872 "message": "Invalid parameters" 00:12:24.872 } 00:12:24.872 04:15:37 -- target/tls.sh@36 -- # killprocess 77334 00:12:24.872 04:15:37 -- common/autotest_common.sh@936 -- # '[' -z 77334 ']' 00:12:24.872 04:15:37 -- common/autotest_common.sh@940 -- # kill -0 77334 00:12:24.872 04:15:37 -- common/autotest_common.sh@941 -- # uname 00:12:24.872 04:15:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:24.872 04:15:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77334 00:12:24.872 killing process with pid 77334 00:12:24.872 Received shutdown signal, test time was about 10.000000 seconds 00:12:24.872 00:12:24.872 Latency(us) 00:12:24.872 [2024-12-06T04:15:37.437Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:24.872 [2024-12-06T04:15:37.437Z] =================================================================================================================== 00:12:24.872 [2024-12-06T04:15:37.438Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:24.873 04:15:37 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:12:24.873 04:15:37 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:12:24.873 04:15:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77334' 00:12:24.873 04:15:37 -- common/autotest_common.sh@955 -- # kill 77334 00:12:24.873 04:15:37 -- common/autotest_common.sh@960 -- # wait 77334 00:12:25.132 04:15:37 -- target/tls.sh@37 -- # return 1 00:12:25.132 04:15:37 -- common/autotest_common.sh@653 -- # es=1 00:12:25.132 04:15:37 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:25.132 04:15:37 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:25.132 04:15:37 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:25.132 04:15:37 -- target/tls.sh@161 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:12:25.132 04:15:37 -- common/autotest_common.sh@650 -- # local es=0 00:12:25.132 04:15:37 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:12:25.132 04:15:37 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:12:25.132 04:15:37 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:25.132 04:15:37 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:12:25.132 04:15:37 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:25.132 04:15:37 -- common/autotest_common.sh@653 -- # run_bdevperf 
nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:12:25.132 04:15:37 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:12:25.132 04:15:37 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:12:25.132 04:15:37 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:12:25.132 04:15:37 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:12:25.132 04:15:37 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:25.132 04:15:37 -- target/tls.sh@28 -- # bdevperf_pid=77362 00:12:25.132 04:15:37 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:25.132 04:15:37 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:25.132 04:15:37 -- target/tls.sh@31 -- # waitforlisten 77362 /var/tmp/bdevperf.sock 00:12:25.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:25.132 04:15:37 -- common/autotest_common.sh@829 -- # '[' -z 77362 ']' 00:12:25.132 04:15:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:25.132 04:15:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:25.132 04:15:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:25.132 04:15:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:25.132 04:15:37 -- common/autotest_common.sh@10 -- # set +x 00:12:25.132 [2024-12-06 04:15:37.615721] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:25.132 [2024-12-06 04:15:37.615823] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77362 ] 00:12:25.391 [2024-12-06 04:15:37.756443] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:25.391 [2024-12-06 04:15:37.853832] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:26.327 04:15:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:26.327 04:15:38 -- common/autotest_common.sh@862 -- # return 0 00:12:26.327 04:15:38 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:12:26.586 [2024-12-06 04:15:38.932938] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:26.586 [2024-12-06 04:15:38.944619] tcp.c: 868:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:12:26.586 [2024-12-06 04:15:38.944672] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:12:26.586 [2024-12-06 04:15:38.944751] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:12:26.586 [2024-12-06 04:15:38.944806] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1900f90 
(107): Transport endpoint is not connected 00:12:26.587 [2024-12-06 04:15:38.945792] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1900f90 (9): Bad file descriptor 00:12:26.587 [2024-12-06 04:15:38.946788] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:12:26.587 [2024-12-06 04:15:38.946822] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:12:26.587 [2024-12-06 04:15:38.946833] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:12:26.587 request: 00:12:26.587 { 00:12:26.587 "name": "TLSTEST", 00:12:26.587 "trtype": "tcp", 00:12:26.587 "traddr": "10.0.0.2", 00:12:26.587 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:26.587 "adrfam": "ipv4", 00:12:26.587 "trsvcid": "4420", 00:12:26.587 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:12:26.587 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt", 00:12:26.587 "method": "bdev_nvme_attach_controller", 00:12:26.587 "req_id": 1 00:12:26.587 } 00:12:26.587 Got JSON-RPC error response 00:12:26.587 response: 00:12:26.587 { 00:12:26.587 "code": -32602, 00:12:26.587 "message": "Invalid parameters" 00:12:26.587 } 00:12:26.587 04:15:38 -- target/tls.sh@36 -- # killprocess 77362 00:12:26.587 04:15:38 -- common/autotest_common.sh@936 -- # '[' -z 77362 ']' 00:12:26.587 04:15:38 -- common/autotest_common.sh@940 -- # kill -0 77362 00:12:26.587 04:15:38 -- common/autotest_common.sh@941 -- # uname 00:12:26.587 04:15:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:26.587 04:15:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77362 00:12:26.587 killing process with pid 77362 00:12:26.587 Received shutdown signal, test time was about 10.000000 seconds 00:12:26.587 00:12:26.587 Latency(us) 00:12:26.587 [2024-12-06T04:15:39.152Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:26.587 [2024-12-06T04:15:39.152Z] =================================================================================================================== 00:12:26.587 [2024-12-06T04:15:39.152Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:26.587 04:15:39 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:12:26.587 04:15:39 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:12:26.587 04:15:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77362' 00:12:26.587 04:15:39 -- common/autotest_common.sh@955 -- # kill 77362 00:12:26.587 04:15:39 -- common/autotest_common.sh@960 -- # wait 77362 00:12:26.846 04:15:39 -- target/tls.sh@37 -- # return 1 00:12:26.846 04:15:39 -- common/autotest_common.sh@653 -- # es=1 00:12:26.846 04:15:39 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:26.846 04:15:39 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:26.846 04:15:39 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:26.846 04:15:39 -- target/tls.sh@164 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:12:26.846 04:15:39 -- common/autotest_common.sh@650 -- # local es=0 00:12:26.846 04:15:39 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:12:26.846 04:15:39 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:12:26.846 04:15:39 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:26.846 04:15:39 -- common/autotest_common.sh@642 -- # 
type -t run_bdevperf 00:12:26.846 04:15:39 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:26.846 04:15:39 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:12:26.846 04:15:39 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:12:26.846 04:15:39 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:12:26.846 04:15:39 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:12:26.846 04:15:39 -- target/tls.sh@23 -- # psk= 00:12:26.846 04:15:39 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:26.846 04:15:39 -- target/tls.sh@28 -- # bdevperf_pid=77395 00:12:26.846 04:15:39 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:26.846 04:15:39 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:26.846 04:15:39 -- target/tls.sh@31 -- # waitforlisten 77395 /var/tmp/bdevperf.sock 00:12:26.846 04:15:39 -- common/autotest_common.sh@829 -- # '[' -z 77395 ']' 00:12:26.846 04:15:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:26.846 04:15:39 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:26.846 04:15:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:26.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:26.846 04:15:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:26.846 04:15:39 -- common/autotest_common.sh@10 -- # set +x 00:12:26.846 [2024-12-06 04:15:39.388128] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:26.846 [2024-12-06 04:15:39.388224] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77395 ] 00:12:27.104 [2024-12-06 04:15:39.528442] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:27.104 [2024-12-06 04:15:39.647133] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:28.041 04:15:40 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:28.041 04:15:40 -- common/autotest_common.sh@862 -- # return 0 00:12:28.041 04:15:40 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:12:28.300 [2024-12-06 04:15:40.685927] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:12:28.301 [2024-12-06 04:15:40.687838] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1598c20 (9): Bad file descriptor 00:12:28.301 [2024-12-06 04:15:40.688830] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:12:28.301 [2024-12-06 04:15:40.688868] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:12:28.301 [2024-12-06 04:15:40.688893] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:12:28.301 request: 00:12:28.301 { 00:12:28.301 "name": "TLSTEST", 00:12:28.301 "trtype": "tcp", 00:12:28.301 "traddr": "10.0.0.2", 00:12:28.301 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:28.301 "adrfam": "ipv4", 00:12:28.301 "trsvcid": "4420", 00:12:28.301 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:28.301 "method": "bdev_nvme_attach_controller", 00:12:28.301 "req_id": 1 00:12:28.301 } 00:12:28.301 Got JSON-RPC error response 00:12:28.301 response: 00:12:28.301 { 00:12:28.301 "code": -32602, 00:12:28.301 "message": "Invalid parameters" 00:12:28.301 } 00:12:28.301 04:15:40 -- target/tls.sh@36 -- # killprocess 77395 00:12:28.301 04:15:40 -- common/autotest_common.sh@936 -- # '[' -z 77395 ']' 00:12:28.301 04:15:40 -- common/autotest_common.sh@940 -- # kill -0 77395 00:12:28.301 04:15:40 -- common/autotest_common.sh@941 -- # uname 00:12:28.301 04:15:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:28.301 04:15:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77395 00:12:28.301 killing process with pid 77395 00:12:28.301 Received shutdown signal, test time was about 10.000000 seconds 00:12:28.301 00:12:28.301 Latency(us) 00:12:28.301 [2024-12-06T04:15:40.866Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:28.301 [2024-12-06T04:15:40.866Z] =================================================================================================================== 00:12:28.301 [2024-12-06T04:15:40.866Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:28.301 04:15:40 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:12:28.301 04:15:40 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:12:28.301 04:15:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77395' 00:12:28.301 04:15:40 -- common/autotest_common.sh@955 -- # kill 77395 00:12:28.301 04:15:40 -- common/autotest_common.sh@960 -- # wait 77395 00:12:28.560 04:15:41 -- target/tls.sh@37 -- # return 1 00:12:28.560 04:15:41 -- common/autotest_common.sh@653 -- # es=1 00:12:28.560 04:15:41 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:28.560 04:15:41 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:28.560 04:15:41 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:28.560 04:15:41 -- target/tls.sh@167 -- # killprocess 76925 00:12:28.560 04:15:41 -- common/autotest_common.sh@936 -- # '[' -z 76925 ']' 00:12:28.560 04:15:41 -- common/autotest_common.sh@940 -- # kill -0 76925 00:12:28.560 04:15:41 -- common/autotest_common.sh@941 -- # uname 00:12:28.560 04:15:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:28.560 04:15:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76925 00:12:28.560 killing process with pid 76925 00:12:28.560 04:15:41 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:12:28.560 04:15:41 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:12:28.560 04:15:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76925' 00:12:28.560 04:15:41 -- common/autotest_common.sh@955 -- # kill 76925 00:12:28.560 04:15:41 -- common/autotest_common.sh@960 -- # wait 76925 00:12:28.820 04:15:41 -- target/tls.sh@168 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 02 00:12:28.820 04:15:41 -- target/tls.sh@49 -- # local key hash crc 00:12:28.820 04:15:41 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:12:28.820 04:15:41 -- target/tls.sh@51 -- # hash=02 
00:12:28.820 04:15:41 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff0011223344556677 00:12:28.820 04:15:41 -- target/tls.sh@52 -- # gzip -1 -c 00:12:28.820 04:15:41 -- target/tls.sh@52 -- # tail -c8 00:12:28.820 04:15:41 -- target/tls.sh@52 -- # head -c 4 00:12:28.820 04:15:41 -- target/tls.sh@52 -- # crc='�e�'\''' 00:12:28.820 04:15:41 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:12:28.820 04:15:41 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeff0011223344556677�e�'\''' 00:12:28.820 04:15:41 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:12:28.820 04:15:41 -- target/tls.sh@168 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:12:28.820 04:15:41 -- target/tls.sh@169 -- # key_long_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:28.820 04:15:41 -- target/tls.sh@170 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:12:28.820 04:15:41 -- target/tls.sh@171 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:28.820 04:15:41 -- target/tls.sh@172 -- # nvmfappstart -m 0x2 00:12:28.820 04:15:41 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:28.820 04:15:41 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:28.820 04:15:41 -- common/autotest_common.sh@10 -- # set +x 00:12:28.820 04:15:41 -- nvmf/common.sh@469 -- # nvmfpid=77443 00:12:28.820 04:15:41 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:28.820 04:15:41 -- nvmf/common.sh@470 -- # waitforlisten 77443 00:12:28.820 04:15:41 -- common/autotest_common.sh@829 -- # '[' -z 77443 ']' 00:12:28.820 04:15:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:28.820 04:15:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:28.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:28.820 04:15:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:28.820 04:15:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:28.820 04:15:41 -- common/autotest_common.sh@10 -- # set +x 00:12:29.080 [2024-12-06 04:15:41.388197] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:29.080 [2024-12-06 04:15:41.388849] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:29.080 [2024-12-06 04:15:41.531329] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:29.080 [2024-12-06 04:15:41.617629] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:29.080 [2024-12-06 04:15:41.617804] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:29.080 [2024-12-06 04:15:41.617817] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:29.080 [2024-12-06 04:15:41.617826] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
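The trace above builds the TLS PSK interchange string by hand. A minimal sketch of that derivation, using only the commands visible in the trace (key value, hash field "02", and output path copied from above; the gzip trailer layout of a 4-byte CRC32 followed by a 4-byte size is what lets the tail/head pair extract the key's CRC32):

  key=00112233445566778899aabbccddeeff0011223344556677
  hash=02
  # gzip -1 -c appends an 8-byte trailer (CRC32, then input size);
  # tail -c8 | head -c 4 therefore yields the raw CRC32 bytes of $key.
  crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c 4)
  # Interchange form: NVMeTLSkey-1:<hash>:base64(key || crc32):
  # (as in the trace, this assumes the CRC bytes contain no NULs or
  # trailing newlines that the shell substitution would drop).
  psk="NVMeTLSkey-1:${hash}:$(echo -n "${key}${crc}" | base64):"
  echo -n "$psk" > /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt
  chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt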
00:12:29.080 [2024-12-06 04:15:41.617862] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:30.018 04:15:42 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:30.018 04:15:42 -- common/autotest_common.sh@862 -- # return 0 00:12:30.018 04:15:42 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:30.018 04:15:42 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:30.018 04:15:42 -- common/autotest_common.sh@10 -- # set +x 00:12:30.018 04:15:42 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:30.018 04:15:42 -- target/tls.sh@174 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:30.018 04:15:42 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:30.018 04:15:42 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:12:30.277 [2024-12-06 04:15:42.650360] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:30.277 04:15:42 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:12:30.536 04:15:42 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:12:30.795 [2024-12-06 04:15:43.130784] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:12:30.795 [2024-12-06 04:15:43.131128] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:30.795 04:15:43 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:12:31.054 malloc0 00:12:31.054 04:15:43 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:12:31.313 04:15:43 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:31.573 04:15:43 -- target/tls.sh@176 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:31.573 04:15:43 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:12:31.573 04:15:43 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:12:31.574 04:15:43 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:12:31.574 04:15:43 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:12:31.574 04:15:43 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:31.574 04:15:43 -- target/tls.sh@28 -- # bdevperf_pid=77492 00:12:31.574 04:15:43 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:31.574 04:15:43 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:31.574 04:15:43 -- target/tls.sh@31 -- # waitforlisten 77492 /var/tmp/bdevperf.sock 00:12:31.574 04:15:43 -- common/autotest_common.sh@829 -- # '[' -z 77492 ']' 00:12:31.574 04:15:43 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:31.574 04:15:43 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:31.574 04:15:43 -- common/autotest_common.sh@836 -- # echo 'Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:31.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:31.574 04:15:43 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:31.574 04:15:43 -- common/autotest_common.sh@10 -- # set +x 00:12:31.574 [2024-12-06 04:15:43.936721] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:31.574 [2024-12-06 04:15:43.936823] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77492 ] 00:12:31.574 [2024-12-06 04:15:44.078974] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:31.833 [2024-12-06 04:15:44.168975] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:32.401 04:15:44 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:32.401 04:15:44 -- common/autotest_common.sh@862 -- # return 0 00:12:32.401 04:15:44 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:32.661 [2024-12-06 04:15:45.107753] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:32.661 TLSTESTn1 00:12:32.661 04:15:45 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:12:32.919 Running I/O for 10 seconds... 00:12:42.893 00:12:42.894 Latency(us) 00:12:42.894 [2024-12-06T04:15:55.459Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:42.894 [2024-12-06T04:15:55.459Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:12:42.894 Verification LBA range: start 0x0 length 0x2000 00:12:42.894 TLSTESTn1 : 10.01 5726.53 22.37 0.00 0.00 22321.87 2546.97 23831.27 00:12:42.894 [2024-12-06T04:15:55.459Z] =================================================================================================================== 00:12:42.894 [2024-12-06T04:15:55.459Z] Total : 5726.53 22.37 0.00 0.00 22321.87 2546.97 23831.27 00:12:42.894 0 00:12:42.894 04:15:55 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:42.894 04:15:55 -- target/tls.sh@45 -- # killprocess 77492 00:12:42.894 04:15:55 -- common/autotest_common.sh@936 -- # '[' -z 77492 ']' 00:12:42.894 04:15:55 -- common/autotest_common.sh@940 -- # kill -0 77492 00:12:42.894 04:15:55 -- common/autotest_common.sh@941 -- # uname 00:12:42.894 04:15:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:42.894 04:15:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77492 00:12:42.894 killing process with pid 77492 00:12:42.894 Received shutdown signal, test time was about 10.000000 seconds 00:12:42.894 00:12:42.894 Latency(us) 00:12:42.894 [2024-12-06T04:15:55.459Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:42.894 [2024-12-06T04:15:55.459Z] =================================================================================================================== 00:12:42.894 [2024-12-06T04:15:55.459Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:42.894 04:15:55 -- 
common/autotest_common.sh@942 -- # process_name=reactor_2 00:12:42.894 04:15:55 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:12:42.894 04:15:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77492' 00:12:42.894 04:15:55 -- common/autotest_common.sh@955 -- # kill 77492 00:12:42.894 04:15:55 -- common/autotest_common.sh@960 -- # wait 77492 00:12:43.153 04:15:55 -- target/tls.sh@179 -- # chmod 0666 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:43.153 04:15:55 -- target/tls.sh@180 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:43.153 04:15:55 -- common/autotest_common.sh@650 -- # local es=0 00:12:43.153 04:15:55 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:43.153 04:15:55 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:12:43.153 04:15:55 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:43.153 04:15:55 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:12:43.153 04:15:55 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:43.153 04:15:55 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:43.153 04:15:55 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:12:43.153 04:15:55 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:12:43.153 04:15:55 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:12:43.153 04:15:55 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:12:43.153 04:15:55 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:43.153 04:15:55 -- target/tls.sh@28 -- # bdevperf_pid=77632 00:12:43.153 04:15:55 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:43.153 04:15:55 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:43.153 04:15:55 -- target/tls.sh@31 -- # waitforlisten 77632 /var/tmp/bdevperf.sock 00:12:43.153 04:15:55 -- common/autotest_common.sh@829 -- # '[' -z 77632 ']' 00:12:43.153 04:15:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:43.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:43.153 04:15:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:43.153 04:15:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:43.153 04:15:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:43.153 04:15:55 -- common/autotest_common.sh@10 -- # set +x 00:12:43.153 [2024-12-06 04:15:55.651461] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
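Stepping back to the passing run that produced the I/O table above: the whole test is driven over bdevperf's RPC socket. A condensed sketch of that flow, with every path, flag and NQN copied from the trace (the harness waits for the socket with waitforlisten rather than a fixed delay, and the 20-second argument to perform_tests is simply what the trace shows):

  BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  SOCK=/var/tmp/bdevperf.sock
  KEY=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt

  # 1. Start bdevperf idle (-z) so the bdev can be attached over $SOCK first.
  $BDEVPERF -m 0x4 -z -r $SOCK -q 128 -o 4096 -w verify -t 10 &

  # 2. Attach the NVMe/TCP controller with the TLS PSK; this creates TLSTESTn1.
  $RPC -s $SOCK bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk $KEY

  # 3. Run the verify workload against the attached bdev and collect the stats.
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s $SOCK perform_tests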
00:12:43.153 [2024-12-06 04:15:55.651568] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77632 ] 00:12:43.413 [2024-12-06 04:15:55.791109] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:43.413 [2024-12-06 04:15:55.869487] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:44.357 04:15:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:44.357 04:15:56 -- common/autotest_common.sh@862 -- # return 0 00:12:44.357 04:15:56 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:44.357 [2024-12-06 04:15:56.835720] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:44.357 [2024-12-06 04:15:56.835772] bdev_nvme_rpc.c: 336:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:12:44.357 request: 00:12:44.357 { 00:12:44.357 "name": "TLSTEST", 00:12:44.357 "trtype": "tcp", 00:12:44.357 "traddr": "10.0.0.2", 00:12:44.357 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:44.357 "adrfam": "ipv4", 00:12:44.357 "trsvcid": "4420", 00:12:44.357 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:44.357 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:12:44.357 "method": "bdev_nvme_attach_controller", 00:12:44.357 "req_id": 1 00:12:44.357 } 00:12:44.357 Got JSON-RPC error response 00:12:44.357 response: 00:12:44.357 { 00:12:44.357 "code": -22, 00:12:44.357 "message": "Could not retrieve PSK from file: /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:12:44.357 } 00:12:44.357 04:15:56 -- target/tls.sh@36 -- # killprocess 77632 00:12:44.357 04:15:56 -- common/autotest_common.sh@936 -- # '[' -z 77632 ']' 00:12:44.357 04:15:56 -- common/autotest_common.sh@940 -- # kill -0 77632 00:12:44.357 04:15:56 -- common/autotest_common.sh@941 -- # uname 00:12:44.357 04:15:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:44.357 04:15:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77632 00:12:44.357 04:15:56 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:12:44.357 04:15:56 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:12:44.357 04:15:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77632' 00:12:44.357 killing process with pid 77632 00:12:44.357 04:15:56 -- common/autotest_common.sh@955 -- # kill 77632 00:12:44.357 Received shutdown signal, test time was about 10.000000 seconds 00:12:44.357 00:12:44.357 Latency(us) 00:12:44.357 [2024-12-06T04:15:56.922Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:44.357 [2024-12-06T04:15:56.922Z] =================================================================================================================== 00:12:44.357 [2024-12-06T04:15:56.922Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:44.357 04:15:56 -- common/autotest_common.sh@960 -- # wait 77632 00:12:44.616 04:15:57 -- target/tls.sh@37 -- # return 1 00:12:44.616 04:15:57 -- common/autotest_common.sh@653 -- # es=1 00:12:44.616 04:15:57 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:44.616 04:15:57 -- 
common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:44.616 04:15:57 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:44.616 04:15:57 -- target/tls.sh@183 -- # killprocess 77443 00:12:44.616 04:15:57 -- common/autotest_common.sh@936 -- # '[' -z 77443 ']' 00:12:44.616 04:15:57 -- common/autotest_common.sh@940 -- # kill -0 77443 00:12:44.616 04:15:57 -- common/autotest_common.sh@941 -- # uname 00:12:44.616 04:15:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:44.616 04:15:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77443 00:12:44.616 killing process with pid 77443 00:12:44.616 04:15:57 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:12:44.616 04:15:57 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:12:44.616 04:15:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77443' 00:12:44.616 04:15:57 -- common/autotest_common.sh@955 -- # kill 77443 00:12:44.616 04:15:57 -- common/autotest_common.sh@960 -- # wait 77443 00:12:44.874 04:15:57 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:12:44.874 04:15:57 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:44.874 04:15:57 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:44.874 04:15:57 -- common/autotest_common.sh@10 -- # set +x 00:12:44.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:44.874 04:15:57 -- nvmf/common.sh@469 -- # nvmfpid=77659 00:12:44.874 04:15:57 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:44.874 04:15:57 -- nvmf/common.sh@470 -- # waitforlisten 77659 00:12:44.874 04:15:57 -- common/autotest_common.sh@829 -- # '[' -z 77659 ']' 00:12:44.874 04:15:57 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:44.874 04:15:57 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:44.874 04:15:57 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:44.874 04:15:57 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:44.874 04:15:57 -- common/autotest_common.sh@10 -- # set +x 00:12:44.874 [2024-12-06 04:15:57.387132] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:44.874 [2024-12-06 04:15:57.387461] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:45.132 [2024-12-06 04:15:57.518941] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:45.132 [2024-12-06 04:15:57.605261] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:45.132 [2024-12-06 04:15:57.605609] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:45.132 [2024-12-06 04:15:57.605742] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:45.132 [2024-12-06 04:15:57.605885] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
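The -22 failure just above is the initiator-side half of a single rule: the PSK file must be readable by its owner only. The target-side check exercised next rejects the same file from nvmf_subsystem_add_host with -32603 after logging "Incorrect permissions for PSK file". A minimal sketch of that behaviour, reusing the paths and NQNs from this trace:

  KEY=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  chmod 0666 "$KEY"    # group/world readable
  $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
      nqn.2016-06.io.spdk:host1 --psk "$KEY"   # fails: -32603, PSK not loaded

  chmod 0600 "$KEY"    # owner-only, as required
  $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
      nqn.2016-06.io.spdk:host1 --psk "$KEY"   # accepted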
00:12:45.132 [2024-12-06 04:15:57.606095] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:46.070 04:15:58 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:46.070 04:15:58 -- common/autotest_common.sh@862 -- # return 0 00:12:46.070 04:15:58 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:46.070 04:15:58 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:46.070 04:15:58 -- common/autotest_common.sh@10 -- # set +x 00:12:46.070 04:15:58 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:46.070 04:15:58 -- target/tls.sh@186 -- # NOT setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:46.070 04:15:58 -- common/autotest_common.sh@650 -- # local es=0 00:12:46.070 04:15:58 -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:46.070 04:15:58 -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:12:46.070 04:15:58 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:46.070 04:15:58 -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:12:46.070 04:15:58 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:46.070 04:15:58 -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:46.070 04:15:58 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:46.070 04:15:58 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:12:46.329 [2024-12-06 04:15:58.650524] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:46.329 04:15:58 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:12:46.329 04:15:58 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:12:46.897 [2024-12-06 04:15:59.158710] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:12:46.897 [2024-12-06 04:15:59.158995] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:46.897 04:15:59 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:12:46.897 malloc0 00:12:47.156 04:15:59 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:12:47.156 04:15:59 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:47.415 [2024-12-06 04:15:59.929272] tcp.c:3551:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:12:47.415 [2024-12-06 04:15:59.929330] tcp.c:3620:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:12:47.415 [2024-12-06 04:15:59.929365] subsystem.c: 880:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:12:47.415 request: 00:12:47.415 { 00:12:47.415 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:47.415 "host": "nqn.2016-06.io.spdk:host1", 00:12:47.415 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:12:47.415 "method": "nvmf_subsystem_add_host", 00:12:47.415 
"req_id": 1 00:12:47.415 } 00:12:47.415 Got JSON-RPC error response 00:12:47.415 response: 00:12:47.415 { 00:12:47.415 "code": -32603, 00:12:47.415 "message": "Internal error" 00:12:47.415 } 00:12:47.415 04:15:59 -- common/autotest_common.sh@653 -- # es=1 00:12:47.415 04:15:59 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:47.415 04:15:59 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:47.415 04:15:59 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:47.415 04:15:59 -- target/tls.sh@189 -- # killprocess 77659 00:12:47.415 04:15:59 -- common/autotest_common.sh@936 -- # '[' -z 77659 ']' 00:12:47.415 04:15:59 -- common/autotest_common.sh@940 -- # kill -0 77659 00:12:47.415 04:15:59 -- common/autotest_common.sh@941 -- # uname 00:12:47.415 04:15:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:47.415 04:15:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77659 00:12:47.415 04:15:59 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:12:47.674 killing process with pid 77659 00:12:47.674 04:15:59 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:12:47.674 04:15:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77659' 00:12:47.674 04:15:59 -- common/autotest_common.sh@955 -- # kill 77659 00:12:47.674 04:15:59 -- common/autotest_common.sh@960 -- # wait 77659 00:12:47.674 04:16:00 -- target/tls.sh@190 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:47.674 04:16:00 -- target/tls.sh@193 -- # nvmfappstart -m 0x2 00:12:47.674 04:16:00 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:47.674 04:16:00 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:47.674 04:16:00 -- common/autotest_common.sh@10 -- # set +x 00:12:47.674 04:16:00 -- nvmf/common.sh@469 -- # nvmfpid=77727 00:12:47.674 04:16:00 -- nvmf/common.sh@470 -- # waitforlisten 77727 00:12:47.674 04:16:00 -- common/autotest_common.sh@829 -- # '[' -z 77727 ']' 00:12:47.674 04:16:00 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:47.674 04:16:00 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:47.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:47.674 04:16:00 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:47.674 04:16:00 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:47.674 04:16:00 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:47.674 04:16:00 -- common/autotest_common.sh@10 -- # set +x 00:12:47.936 [2024-12-06 04:16:00.249431] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:47.936 [2024-12-06 04:16:00.249539] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:47.936 [2024-12-06 04:16:00.390532] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:47.936 [2024-12-06 04:16:00.471383] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:47.936 [2024-12-06 04:16:00.471561] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:47.936 [2024-12-06 04:16:00.471574] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:47.936 [2024-12-06 04:16:00.471583] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:47.936 [2024-12-06 04:16:00.471613] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:48.908 04:16:01 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:48.908 04:16:01 -- common/autotest_common.sh@862 -- # return 0 00:12:48.908 04:16:01 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:48.908 04:16:01 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:48.908 04:16:01 -- common/autotest_common.sh@10 -- # set +x 00:12:48.909 04:16:01 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:48.909 04:16:01 -- target/tls.sh@194 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:48.909 04:16:01 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:48.909 04:16:01 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:12:49.168 [2024-12-06 04:16:01.508771] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:49.168 04:16:01 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:12:49.427 04:16:01 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:12:49.685 [2024-12-06 04:16:02.052949] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:12:49.685 [2024-12-06 04:16:02.053191] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:49.685 04:16:02 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:12:49.944 malloc0 00:12:49.944 04:16:02 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:12:50.203 04:16:02 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:50.462 04:16:02 -- target/tls.sh@197 -- # bdevperf_pid=77782 00:12:50.462 04:16:02 -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:50.462 04:16:02 -- target/tls.sh@199 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:50.462 04:16:02 -- target/tls.sh@200 -- # waitforlisten 77782 /var/tmp/bdevperf.sock 00:12:50.462 04:16:02 -- common/autotest_common.sh@829 -- # '[' -z 77782 ']' 00:12:50.462 04:16:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:50.462 04:16:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:50.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:50.462 04:16:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
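With the key back at mode 0600, the target-side setup above succeeds. A condensed sketch of those RPCs in the order the trace issues them (all names, sizes and addresses copied from the trace; the calls go to the target's default RPC socket, as they do above):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  KEY=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt

  $RPC nvmf_create_transport -t tcp -o
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  # -k marks the listener as TLS-secured ("secure_channel": true in the
  # save_config dump further below).
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  $RPC bdev_malloc_create 32 4096 -b malloc0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$KEY"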
00:12:50.462 04:16:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:50.462 04:16:02 -- common/autotest_common.sh@10 -- # set +x 00:12:50.462 [2024-12-06 04:16:02.851741] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:50.462 [2024-12-06 04:16:02.851841] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77782 ] 00:12:50.462 [2024-12-06 04:16:02.989746] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:50.722 [2024-12-06 04:16:03.084336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:51.290 04:16:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:51.290 04:16:03 -- common/autotest_common.sh@862 -- # return 0 00:12:51.290 04:16:03 -- target/tls.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:51.549 [2024-12-06 04:16:04.012970] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:51.549 TLSTESTn1 00:12:51.549 04:16:04 -- target/tls.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:12:52.117 04:16:04 -- target/tls.sh@205 -- # tgtconf='{ 00:12:52.117 "subsystems": [ 00:12:52.117 { 00:12:52.117 "subsystem": "iobuf", 00:12:52.117 "config": [ 00:12:52.117 { 00:12:52.117 "method": "iobuf_set_options", 00:12:52.117 "params": { 00:12:52.117 "small_pool_count": 8192, 00:12:52.117 "large_pool_count": 1024, 00:12:52.117 "small_bufsize": 8192, 00:12:52.117 "large_bufsize": 135168 00:12:52.117 } 00:12:52.117 } 00:12:52.117 ] 00:12:52.117 }, 00:12:52.117 { 00:12:52.117 "subsystem": "sock", 00:12:52.117 "config": [ 00:12:52.117 { 00:12:52.117 "method": "sock_impl_set_options", 00:12:52.117 "params": { 00:12:52.117 "impl_name": "uring", 00:12:52.117 "recv_buf_size": 2097152, 00:12:52.117 "send_buf_size": 2097152, 00:12:52.117 "enable_recv_pipe": true, 00:12:52.117 "enable_quickack": false, 00:12:52.117 "enable_placement_id": 0, 00:12:52.117 "enable_zerocopy_send_server": false, 00:12:52.117 "enable_zerocopy_send_client": false, 00:12:52.117 "zerocopy_threshold": 0, 00:12:52.117 "tls_version": 0, 00:12:52.117 "enable_ktls": false 00:12:52.117 } 00:12:52.117 }, 00:12:52.117 { 00:12:52.117 "method": "sock_impl_set_options", 00:12:52.117 "params": { 00:12:52.117 "impl_name": "posix", 00:12:52.117 "recv_buf_size": 2097152, 00:12:52.117 "send_buf_size": 2097152, 00:12:52.117 "enable_recv_pipe": true, 00:12:52.117 "enable_quickack": false, 00:12:52.117 "enable_placement_id": 0, 00:12:52.117 "enable_zerocopy_send_server": true, 00:12:52.117 "enable_zerocopy_send_client": false, 00:12:52.117 "zerocopy_threshold": 0, 00:12:52.117 "tls_version": 0, 00:12:52.117 "enable_ktls": false 00:12:52.117 } 00:12:52.117 }, 00:12:52.117 { 00:12:52.117 "method": "sock_impl_set_options", 00:12:52.117 "params": { 00:12:52.117 "impl_name": "ssl", 00:12:52.117 "recv_buf_size": 4096, 00:12:52.117 "send_buf_size": 4096, 00:12:52.117 "enable_recv_pipe": true, 00:12:52.117 "enable_quickack": false, 00:12:52.117 "enable_placement_id": 0, 00:12:52.117 "enable_zerocopy_send_server": true, 00:12:52.117 "enable_zerocopy_send_client": false, 00:12:52.117 
"zerocopy_threshold": 0, 00:12:52.117 "tls_version": 0, 00:12:52.117 "enable_ktls": false 00:12:52.117 } 00:12:52.117 } 00:12:52.117 ] 00:12:52.117 }, 00:12:52.117 { 00:12:52.118 "subsystem": "vmd", 00:12:52.118 "config": [] 00:12:52.118 }, 00:12:52.118 { 00:12:52.118 "subsystem": "accel", 00:12:52.118 "config": [ 00:12:52.118 { 00:12:52.118 "method": "accel_set_options", 00:12:52.118 "params": { 00:12:52.118 "small_cache_size": 128, 00:12:52.118 "large_cache_size": 16, 00:12:52.118 "task_count": 2048, 00:12:52.118 "sequence_count": 2048, 00:12:52.118 "buf_count": 2048 00:12:52.118 } 00:12:52.118 } 00:12:52.118 ] 00:12:52.118 }, 00:12:52.118 { 00:12:52.118 "subsystem": "bdev", 00:12:52.118 "config": [ 00:12:52.118 { 00:12:52.118 "method": "bdev_set_options", 00:12:52.118 "params": { 00:12:52.118 "bdev_io_pool_size": 65535, 00:12:52.118 "bdev_io_cache_size": 256, 00:12:52.118 "bdev_auto_examine": true, 00:12:52.118 "iobuf_small_cache_size": 128, 00:12:52.118 "iobuf_large_cache_size": 16 00:12:52.118 } 00:12:52.118 }, 00:12:52.118 { 00:12:52.118 "method": "bdev_raid_set_options", 00:12:52.118 "params": { 00:12:52.118 "process_window_size_kb": 1024 00:12:52.118 } 00:12:52.118 }, 00:12:52.118 { 00:12:52.118 "method": "bdev_iscsi_set_options", 00:12:52.118 "params": { 00:12:52.118 "timeout_sec": 30 00:12:52.118 } 00:12:52.118 }, 00:12:52.118 { 00:12:52.118 "method": "bdev_nvme_set_options", 00:12:52.118 "params": { 00:12:52.118 "action_on_timeout": "none", 00:12:52.118 "timeout_us": 0, 00:12:52.118 "timeout_admin_us": 0, 00:12:52.118 "keep_alive_timeout_ms": 10000, 00:12:52.118 "transport_retry_count": 4, 00:12:52.118 "arbitration_burst": 0, 00:12:52.118 "low_priority_weight": 0, 00:12:52.118 "medium_priority_weight": 0, 00:12:52.118 "high_priority_weight": 0, 00:12:52.118 "nvme_adminq_poll_period_us": 10000, 00:12:52.118 "nvme_ioq_poll_period_us": 0, 00:12:52.118 "io_queue_requests": 0, 00:12:52.118 "delay_cmd_submit": true, 00:12:52.118 "bdev_retry_count": 3, 00:12:52.118 "transport_ack_timeout": 0, 00:12:52.118 "ctrlr_loss_timeout_sec": 0, 00:12:52.118 "reconnect_delay_sec": 0, 00:12:52.118 "fast_io_fail_timeout_sec": 0, 00:12:52.118 "generate_uuids": false, 00:12:52.118 "transport_tos": 0, 00:12:52.118 "io_path_stat": false, 00:12:52.118 "allow_accel_sequence": false 00:12:52.118 } 00:12:52.118 }, 00:12:52.118 { 00:12:52.118 "method": "bdev_nvme_set_hotplug", 00:12:52.118 "params": { 00:12:52.118 "period_us": 100000, 00:12:52.118 "enable": false 00:12:52.118 } 00:12:52.118 }, 00:12:52.118 { 00:12:52.118 "method": "bdev_malloc_create", 00:12:52.118 "params": { 00:12:52.118 "name": "malloc0", 00:12:52.118 "num_blocks": 8192, 00:12:52.118 "block_size": 4096, 00:12:52.118 "physical_block_size": 4096, 00:12:52.118 "uuid": "d3e66e4a-0001-4f6d-85d2-b3350bdbbc39", 00:12:52.118 "optimal_io_boundary": 0 00:12:52.118 } 00:12:52.118 }, 00:12:52.118 { 00:12:52.118 "method": "bdev_wait_for_examine" 00:12:52.118 } 00:12:52.118 ] 00:12:52.118 }, 00:12:52.118 { 00:12:52.118 "subsystem": "nbd", 00:12:52.118 "config": [] 00:12:52.118 }, 00:12:52.118 { 00:12:52.118 "subsystem": "scheduler", 00:12:52.118 "config": [ 00:12:52.118 { 00:12:52.118 "method": "framework_set_scheduler", 00:12:52.118 "params": { 00:12:52.118 "name": "static" 00:12:52.118 } 00:12:52.118 } 00:12:52.118 ] 00:12:52.118 }, 00:12:52.118 { 00:12:52.118 "subsystem": "nvmf", 00:12:52.118 "config": [ 00:12:52.118 { 00:12:52.118 "method": "nvmf_set_config", 00:12:52.118 "params": { 00:12:52.118 "discovery_filter": "match_any", 00:12:52.118 
"admin_cmd_passthru": { 00:12:52.118 "identify_ctrlr": false 00:12:52.118 } 00:12:52.118 } 00:12:52.118 }, 00:12:52.118 { 00:12:52.118 "method": "nvmf_set_max_subsystems", 00:12:52.118 "params": { 00:12:52.118 "max_subsystems": 1024 00:12:52.118 } 00:12:52.118 }, 00:12:52.118 { 00:12:52.118 "method": "nvmf_set_crdt", 00:12:52.118 "params": { 00:12:52.118 "crdt1": 0, 00:12:52.118 "crdt2": 0, 00:12:52.118 "crdt3": 0 00:12:52.118 } 00:12:52.118 }, 00:12:52.118 { 00:12:52.118 "method": "nvmf_create_transport", 00:12:52.118 "params": { 00:12:52.118 "trtype": "TCP", 00:12:52.118 "max_queue_depth": 128, 00:12:52.118 "max_io_qpairs_per_ctrlr": 127, 00:12:52.118 "in_capsule_data_size": 4096, 00:12:52.118 "max_io_size": 131072, 00:12:52.118 "io_unit_size": 131072, 00:12:52.118 "max_aq_depth": 128, 00:12:52.118 "num_shared_buffers": 511, 00:12:52.118 "buf_cache_size": 4294967295, 00:12:52.118 "dif_insert_or_strip": false, 00:12:52.118 "zcopy": false, 00:12:52.118 "c2h_success": false, 00:12:52.118 "sock_priority": 0, 00:12:52.118 "abort_timeout_sec": 1 00:12:52.118 } 00:12:52.118 }, 00:12:52.118 { 00:12:52.118 "method": "nvmf_create_subsystem", 00:12:52.118 "params": { 00:12:52.118 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:52.118 "allow_any_host": false, 00:12:52.118 "serial_number": "SPDK00000000000001", 00:12:52.118 "model_number": "SPDK bdev Controller", 00:12:52.118 "max_namespaces": 10, 00:12:52.118 "min_cntlid": 1, 00:12:52.118 "max_cntlid": 65519, 00:12:52.118 "ana_reporting": false 00:12:52.118 } 00:12:52.118 }, 00:12:52.118 { 00:12:52.118 "method": "nvmf_subsystem_add_host", 00:12:52.118 "params": { 00:12:52.118 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:52.118 "host": "nqn.2016-06.io.spdk:host1", 00:12:52.118 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:12:52.118 } 00:12:52.118 }, 00:12:52.118 { 00:12:52.118 "method": "nvmf_subsystem_add_ns", 00:12:52.118 "params": { 00:12:52.118 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:52.118 "namespace": { 00:12:52.118 "nsid": 1, 00:12:52.118 "bdev_name": "malloc0", 00:12:52.118 "nguid": "D3E66E4A00014F6D85D2B3350BDBBC39", 00:12:52.118 "uuid": "d3e66e4a-0001-4f6d-85d2-b3350bdbbc39" 00:12:52.118 } 00:12:52.118 } 00:12:52.118 }, 00:12:52.118 { 00:12:52.118 "method": "nvmf_subsystem_add_listener", 00:12:52.118 "params": { 00:12:52.118 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:52.118 "listen_address": { 00:12:52.118 "trtype": "TCP", 00:12:52.118 "adrfam": "IPv4", 00:12:52.118 "traddr": "10.0.0.2", 00:12:52.118 "trsvcid": "4420" 00:12:52.118 }, 00:12:52.118 "secure_channel": true 00:12:52.118 } 00:12:52.118 } 00:12:52.118 ] 00:12:52.118 } 00:12:52.118 ] 00:12:52.118 }' 00:12:52.118 04:16:04 -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:12:52.378 04:16:04 -- target/tls.sh@206 -- # bdevperfconf='{ 00:12:52.378 "subsystems": [ 00:12:52.378 { 00:12:52.378 "subsystem": "iobuf", 00:12:52.378 "config": [ 00:12:52.378 { 00:12:52.378 "method": "iobuf_set_options", 00:12:52.378 "params": { 00:12:52.378 "small_pool_count": 8192, 00:12:52.378 "large_pool_count": 1024, 00:12:52.378 "small_bufsize": 8192, 00:12:52.378 "large_bufsize": 135168 00:12:52.378 } 00:12:52.378 } 00:12:52.378 ] 00:12:52.378 }, 00:12:52.378 { 00:12:52.378 "subsystem": "sock", 00:12:52.378 "config": [ 00:12:52.378 { 00:12:52.378 "method": "sock_impl_set_options", 00:12:52.378 "params": { 00:12:52.378 "impl_name": "uring", 00:12:52.378 "recv_buf_size": 2097152, 00:12:52.378 "send_buf_size": 2097152, 
00:12:52.378 "enable_recv_pipe": true, 00:12:52.378 "enable_quickack": false, 00:12:52.378 "enable_placement_id": 0, 00:12:52.378 "enable_zerocopy_send_server": false, 00:12:52.378 "enable_zerocopy_send_client": false, 00:12:52.378 "zerocopy_threshold": 0, 00:12:52.378 "tls_version": 0, 00:12:52.378 "enable_ktls": false 00:12:52.378 } 00:12:52.378 }, 00:12:52.378 { 00:12:52.378 "method": "sock_impl_set_options", 00:12:52.378 "params": { 00:12:52.378 "impl_name": "posix", 00:12:52.378 "recv_buf_size": 2097152, 00:12:52.378 "send_buf_size": 2097152, 00:12:52.378 "enable_recv_pipe": true, 00:12:52.378 "enable_quickack": false, 00:12:52.378 "enable_placement_id": 0, 00:12:52.378 "enable_zerocopy_send_server": true, 00:12:52.378 "enable_zerocopy_send_client": false, 00:12:52.378 "zerocopy_threshold": 0, 00:12:52.378 "tls_version": 0, 00:12:52.378 "enable_ktls": false 00:12:52.378 } 00:12:52.378 }, 00:12:52.378 { 00:12:52.378 "method": "sock_impl_set_options", 00:12:52.378 "params": { 00:12:52.378 "impl_name": "ssl", 00:12:52.378 "recv_buf_size": 4096, 00:12:52.378 "send_buf_size": 4096, 00:12:52.378 "enable_recv_pipe": true, 00:12:52.378 "enable_quickack": false, 00:12:52.378 "enable_placement_id": 0, 00:12:52.378 "enable_zerocopy_send_server": true, 00:12:52.378 "enable_zerocopy_send_client": false, 00:12:52.378 "zerocopy_threshold": 0, 00:12:52.378 "tls_version": 0, 00:12:52.378 "enable_ktls": false 00:12:52.378 } 00:12:52.378 } 00:12:52.378 ] 00:12:52.378 }, 00:12:52.378 { 00:12:52.378 "subsystem": "vmd", 00:12:52.378 "config": [] 00:12:52.378 }, 00:12:52.378 { 00:12:52.378 "subsystem": "accel", 00:12:52.378 "config": [ 00:12:52.378 { 00:12:52.378 "method": "accel_set_options", 00:12:52.378 "params": { 00:12:52.378 "small_cache_size": 128, 00:12:52.378 "large_cache_size": 16, 00:12:52.378 "task_count": 2048, 00:12:52.378 "sequence_count": 2048, 00:12:52.378 "buf_count": 2048 00:12:52.378 } 00:12:52.378 } 00:12:52.378 ] 00:12:52.378 }, 00:12:52.378 { 00:12:52.378 "subsystem": "bdev", 00:12:52.378 "config": [ 00:12:52.378 { 00:12:52.378 "method": "bdev_set_options", 00:12:52.378 "params": { 00:12:52.378 "bdev_io_pool_size": 65535, 00:12:52.378 "bdev_io_cache_size": 256, 00:12:52.378 "bdev_auto_examine": true, 00:12:52.378 "iobuf_small_cache_size": 128, 00:12:52.378 "iobuf_large_cache_size": 16 00:12:52.378 } 00:12:52.378 }, 00:12:52.378 { 00:12:52.378 "method": "bdev_raid_set_options", 00:12:52.378 "params": { 00:12:52.378 "process_window_size_kb": 1024 00:12:52.378 } 00:12:52.378 }, 00:12:52.378 { 00:12:52.378 "method": "bdev_iscsi_set_options", 00:12:52.378 "params": { 00:12:52.378 "timeout_sec": 30 00:12:52.378 } 00:12:52.378 }, 00:12:52.378 { 00:12:52.378 "method": "bdev_nvme_set_options", 00:12:52.378 "params": { 00:12:52.378 "action_on_timeout": "none", 00:12:52.378 "timeout_us": 0, 00:12:52.378 "timeout_admin_us": 0, 00:12:52.378 "keep_alive_timeout_ms": 10000, 00:12:52.378 "transport_retry_count": 4, 00:12:52.378 "arbitration_burst": 0, 00:12:52.378 "low_priority_weight": 0, 00:12:52.378 "medium_priority_weight": 0, 00:12:52.378 "high_priority_weight": 0, 00:12:52.378 "nvme_adminq_poll_period_us": 10000, 00:12:52.379 "nvme_ioq_poll_period_us": 0, 00:12:52.379 "io_queue_requests": 512, 00:12:52.379 "delay_cmd_submit": true, 00:12:52.379 "bdev_retry_count": 3, 00:12:52.379 "transport_ack_timeout": 0, 00:12:52.379 "ctrlr_loss_timeout_sec": 0, 00:12:52.379 "reconnect_delay_sec": 0, 00:12:52.379 "fast_io_fail_timeout_sec": 0, 00:12:52.379 "generate_uuids": false, 00:12:52.379 
"transport_tos": 0, 00:12:52.379 "io_path_stat": false, 00:12:52.379 "allow_accel_sequence": false 00:12:52.379 } 00:12:52.379 }, 00:12:52.379 { 00:12:52.379 "method": "bdev_nvme_attach_controller", 00:12:52.379 "params": { 00:12:52.379 "name": "TLSTEST", 00:12:52.379 "trtype": "TCP", 00:12:52.379 "adrfam": "IPv4", 00:12:52.379 "traddr": "10.0.0.2", 00:12:52.379 "trsvcid": "4420", 00:12:52.379 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:52.379 "prchk_reftag": false, 00:12:52.379 "prchk_guard": false, 00:12:52.379 "ctrlr_loss_timeout_sec": 0, 00:12:52.379 "reconnect_delay_sec": 0, 00:12:52.379 "fast_io_fail_timeout_sec": 0, 00:12:52.379 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:12:52.379 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:52.379 "hdgst": false, 00:12:52.379 "ddgst": false 00:12:52.379 } 00:12:52.379 }, 00:12:52.379 { 00:12:52.379 "method": "bdev_nvme_set_hotplug", 00:12:52.379 "params": { 00:12:52.379 "period_us": 100000, 00:12:52.379 "enable": false 00:12:52.379 } 00:12:52.379 }, 00:12:52.379 { 00:12:52.379 "method": "bdev_wait_for_examine" 00:12:52.379 } 00:12:52.379 ] 00:12:52.379 }, 00:12:52.379 { 00:12:52.379 "subsystem": "nbd", 00:12:52.379 "config": [] 00:12:52.379 } 00:12:52.379 ] 00:12:52.379 }' 00:12:52.379 04:16:04 -- target/tls.sh@208 -- # killprocess 77782 00:12:52.379 04:16:04 -- common/autotest_common.sh@936 -- # '[' -z 77782 ']' 00:12:52.379 04:16:04 -- common/autotest_common.sh@940 -- # kill -0 77782 00:12:52.379 04:16:04 -- common/autotest_common.sh@941 -- # uname 00:12:52.379 04:16:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:52.379 04:16:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77782 00:12:52.379 04:16:04 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:12:52.379 04:16:04 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:12:52.379 killing process with pid 77782 00:12:52.379 04:16:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77782' 00:12:52.379 04:16:04 -- common/autotest_common.sh@955 -- # kill 77782 00:12:52.379 Received shutdown signal, test time was about 10.000000 seconds 00:12:52.379 00:12:52.379 Latency(us) 00:12:52.379 [2024-12-06T04:16:04.944Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:52.379 [2024-12-06T04:16:04.944Z] =================================================================================================================== 00:12:52.379 [2024-12-06T04:16:04.944Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:52.379 04:16:04 -- common/autotest_common.sh@960 -- # wait 77782 00:12:52.639 04:16:04 -- target/tls.sh@209 -- # killprocess 77727 00:12:52.639 04:16:04 -- common/autotest_common.sh@936 -- # '[' -z 77727 ']' 00:12:52.639 04:16:04 -- common/autotest_common.sh@940 -- # kill -0 77727 00:12:52.639 04:16:04 -- common/autotest_common.sh@941 -- # uname 00:12:52.639 04:16:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:52.639 04:16:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77727 00:12:52.639 04:16:04 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:12:52.639 04:16:04 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:12:52.639 killing process with pid 77727 00:12:52.639 04:16:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77727' 00:12:52.639 04:16:04 -- common/autotest_common.sh@955 -- # kill 77727 00:12:52.639 04:16:04 -- common/autotest_common.sh@960 -- # 
wait 77727 00:12:52.639 04:16:05 -- target/tls.sh@212 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:12:52.639 04:16:05 -- target/tls.sh@212 -- # echo '{ 00:12:52.639 "subsystems": [ 00:12:52.639 { 00:12:52.639 "subsystem": "iobuf", 00:12:52.639 "config": [ 00:12:52.639 { 00:12:52.639 "method": "iobuf_set_options", 00:12:52.639 "params": { 00:12:52.639 "small_pool_count": 8192, 00:12:52.639 "large_pool_count": 1024, 00:12:52.639 "small_bufsize": 8192, 00:12:52.639 "large_bufsize": 135168 00:12:52.639 } 00:12:52.639 } 00:12:52.639 ] 00:12:52.639 }, 00:12:52.639 { 00:12:52.639 "subsystem": "sock", 00:12:52.639 "config": [ 00:12:52.639 { 00:12:52.639 "method": "sock_impl_set_options", 00:12:52.639 "params": { 00:12:52.639 "impl_name": "uring", 00:12:52.639 "recv_buf_size": 2097152, 00:12:52.639 "send_buf_size": 2097152, 00:12:52.639 "enable_recv_pipe": true, 00:12:52.639 "enable_quickack": false, 00:12:52.639 "enable_placement_id": 0, 00:12:52.639 "enable_zerocopy_send_server": false, 00:12:52.639 "enable_zerocopy_send_client": false, 00:12:52.639 "zerocopy_threshold": 0, 00:12:52.639 "tls_version": 0, 00:12:52.639 "enable_ktls": false 00:12:52.639 } 00:12:52.639 }, 00:12:52.639 { 00:12:52.639 "method": "sock_impl_set_options", 00:12:52.639 "params": { 00:12:52.639 "impl_name": "posix", 00:12:52.639 "recv_buf_size": 2097152, 00:12:52.639 "send_buf_size": 2097152, 00:12:52.639 "enable_recv_pipe": true, 00:12:52.639 "enable_quickack": false, 00:12:52.639 "enable_placement_id": 0, 00:12:52.639 "enable_zerocopy_send_server": true, 00:12:52.639 "enable_zerocopy_send_client": false, 00:12:52.639 "zerocopy_threshold": 0, 00:12:52.639 "tls_version": 0, 00:12:52.639 "enable_ktls": false 00:12:52.639 } 00:12:52.639 }, 00:12:52.639 { 00:12:52.639 "method": "sock_impl_set_options", 00:12:52.639 "params": { 00:12:52.639 "impl_name": "ssl", 00:12:52.639 "recv_buf_size": 4096, 00:12:52.639 "send_buf_size": 4096, 00:12:52.639 "enable_recv_pipe": true, 00:12:52.639 "enable_quickack": false, 00:12:52.639 "enable_placement_id": 0, 00:12:52.639 "enable_zerocopy_send_server": true, 00:12:52.639 "enable_zerocopy_send_client": false, 00:12:52.639 "zerocopy_threshold": 0, 00:12:52.639 "tls_version": 0, 00:12:52.639 "enable_ktls": false 00:12:52.639 } 00:12:52.639 } 00:12:52.639 ] 00:12:52.639 }, 00:12:52.639 { 00:12:52.639 "subsystem": "vmd", 00:12:52.639 "config": [] 00:12:52.639 }, 00:12:52.639 { 00:12:52.639 "subsystem": "accel", 00:12:52.639 "config": [ 00:12:52.639 { 00:12:52.639 "method": "accel_set_options", 00:12:52.639 "params": { 00:12:52.639 "small_cache_size": 128, 00:12:52.640 "large_cache_size": 16, 00:12:52.640 "task_count": 2048, 00:12:52.640 "sequence_count": 2048, 00:12:52.640 "buf_count": 2048 00:12:52.640 } 00:12:52.640 } 00:12:52.640 ] 00:12:52.640 }, 00:12:52.640 { 00:12:52.640 "subsystem": "bdev", 00:12:52.640 "config": [ 00:12:52.640 { 00:12:52.640 "method": "bdev_set_options", 00:12:52.640 "params": { 00:12:52.640 "bdev_io_pool_size": 65535, 00:12:52.640 "bdev_io_cache_size": 256, 00:12:52.640 "bdev_auto_examine": true, 00:12:52.640 "iobuf_small_cache_size": 128, 00:12:52.640 "iobuf_large_cache_size": 16 00:12:52.640 } 00:12:52.640 }, 00:12:52.640 { 00:12:52.640 "method": "bdev_raid_set_options", 00:12:52.640 "params": { 00:12:52.640 "process_window_size_kb": 1024 00:12:52.640 } 00:12:52.640 }, 00:12:52.640 { 00:12:52.640 "method": "bdev_iscsi_set_options", 00:12:52.640 "params": { 00:12:52.640 "timeout_sec": 30 00:12:52.640 } 00:12:52.640 }, 00:12:52.640 { 00:12:52.640 "method": 
"bdev_nvme_set_options", 00:12:52.640 "params": { 00:12:52.640 "action_on_timeout": "none", 00:12:52.640 "timeout_us": 0, 00:12:52.640 "timeout_admin_us": 0, 00:12:52.640 "keep_alive_timeout_ms": 10000, 00:12:52.640 "transport_retry_count": 4, 00:12:52.640 "arbitration_burst": 0, 00:12:52.640 "low_priority_weight": 0, 00:12:52.640 "medium_priority_weight": 0, 00:12:52.640 "high_priority_weight": 0, 00:12:52.640 "nvme_adminq_poll_period_us": 10000, 00:12:52.640 "nvme_ioq_poll_period_us": 0, 00:12:52.640 "io_queue_requests": 0, 00:12:52.640 "delay_cmd_submit": true, 00:12:52.640 "bdev_retry_count": 3, 00:12:52.640 "transport_ack_timeout": 0, 00:12:52.640 "ctrlr_loss_timeout_sec": 0, 00:12:52.640 "reconnect_delay_sec": 0, 00:12:52.640 "fast_io_fail_timeout_sec": 0, 00:12:52.640 "generate_uuids": false, 00:12:52.640 "transport_tos": 0, 00:12:52.640 "io_path_stat": false, 00:12:52.640 "allow_accel_sequence": false 00:12:52.640 } 00:12:52.640 }, 00:12:52.640 { 00:12:52.640 "method": "bdev_nvme_set_hotplug", 00:12:52.640 "params": { 00:12:52.640 "period_us": 100000, 00:12:52.640 "enable": false 00:12:52.640 } 00:12:52.640 }, 00:12:52.640 { 00:12:52.640 "method": "bdev_malloc_create", 00:12:52.640 "params": { 00:12:52.640 "name": "malloc0", 00:12:52.640 "num_blocks": 8192, 00:12:52.640 "block_size": 4096, 00:12:52.640 "physical_block_size": 4096, 00:12:52.640 "uuid": "d3e66e4a-0001-4f6d-85d2-b3350bdbbc39", 00:12:52.640 "optimal_io_boundary": 0 00:12:52.640 } 00:12:52.640 }, 00:12:52.640 { 00:12:52.640 "method": "bdev_wait_for_examine" 00:12:52.640 } 00:12:52.640 ] 00:12:52.640 }, 00:12:52.640 { 00:12:52.640 "subsystem": "nbd", 00:12:52.640 "config": [] 00:12:52.640 }, 00:12:52.640 { 00:12:52.640 "subsystem": "scheduler", 00:12:52.640 "config": [ 00:12:52.640 { 00:12:52.640 "method": "framework_set_scheduler", 00:12:52.640 "params": { 00:12:52.640 "name": "static" 00:12:52.640 } 00:12:52.640 } 00:12:52.640 ] 00:12:52.640 }, 00:12:52.640 { 00:12:52.640 "subsystem": "nvmf", 00:12:52.640 "config": [ 00:12:52.640 { 00:12:52.640 "method": "nvmf_set_config", 00:12:52.640 "params": { 00:12:52.640 "discovery_filter": "match_any", 00:12:52.640 "admin_cmd_passthru": { 00:12:52.640 "identify_ctrlr": false 00:12:52.640 } 00:12:52.640 } 00:12:52.640 }, 00:12:52.640 { 00:12:52.640 "method": "nvmf_set_max_subsystems", 00:12:52.640 "params": { 00:12:52.640 "max_subsystems": 1024 00:12:52.640 } 00:12:52.640 }, 00:12:52.640 { 00:12:52.640 "method": "nvmf_set_crdt", 00:12:52.640 "params": { 00:12:52.640 "crdt1": 0, 00:12:52.640 "crdt2": 0, 00:12:52.640 "crdt3": 0 00:12:52.640 } 00:12:52.640 }, 00:12:52.640 { 00:12:52.640 "method": "nvmf_create_transport", 00:12:52.640 "params": { 00:12:52.640 "trtype": "TCP", 00:12:52.640 "max_queue_depth": 128, 00:12:52.640 "max_io_qpairs_per_ctrlr": 127, 00:12:52.640 "in_capsule_data_size": 4096, 00:12:52.640 "max_io_size": 131072, 00:12:52.640 "io_unit_size": 131072, 00:12:52.640 "max_aq_depth": 128, 00:12:52.640 "num_shared_buffers": 511, 00:12:52.640 "buf_cache_size": 4294967295, 00:12:52.640 "dif_insert_or_strip": false, 00:12:52.640 "zcopy": false, 00:12:52.640 "c2h_success": false, 00:12:52.640 "sock_priority": 0, 00:12:52.640 "abort_timeout_sec": 1 00:12:52.640 } 00:12:52.640 }, 00:12:52.640 { 00:12:52.640 "method": "nvmf_create_subsystem", 00:12:52.640 "params": { 00:12:52.640 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:52.640 "allow_any_host": false, 00:12:52.640 "serial_number": "SPDK00000000000001", 00:12:52.640 "model_number": "SPDK bdev Controller", 00:12:52.640 
"max_namespaces": 10, 00:12:52.640 "min_cntlid": 1, 00:12:52.640 "max_cntlid": 65519, 00:12:52.640 "ana_reporting": false 00:12:52.640 } 00:12:52.640 }, 00:12:52.640 { 00:12:52.640 "method": "nvmf_subsystem_add_host", 00:12:52.640 "params": { 00:12:52.640 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:52.640 "host": "nqn.2016-06.io.spdk:host1", 00:12:52.640 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:12:52.640 } 00:12:52.640 }, 00:12:52.640 { 00:12:52.640 "method": "nvmf_subsystem_add_ns", 00:12:52.640 "params": { 00:12:52.640 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:52.640 "namespace": { 00:12:52.640 "nsid": 1, 00:12:52.640 "bdev_name": "malloc0", 00:12:52.640 "nguid": "D3E66E4A00014F6D85D2B3350BDBBC39", 00:12:52.640 "uuid": "d3e66e4a-0001-4f6d-85d2-b3350bdbbc39" 00:12:52.640 } 00:12:52.640 } 00:12:52.640 }, 00:12:52.640 { 00:12:52.640 "method": "nvmf_subsystem_add_listener", 00:12:52.640 "params": { 00:12:52.640 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:52.640 "listen_address": { 00:12:52.640 "trtype": "TCP", 00:12:52.640 "adrfam": "IPv4", 00:12:52.640 "traddr": "10.0.0.2", 00:12:52.640 "trsvcid": "4420" 00:12:52.640 }, 00:12:52.640 "secure_channel": true 00:12:52.640 } 00:12:52.640 } 00:12:52.640 ] 00:12:52.640 } 00:12:52.640 ] 00:12:52.640 }' 00:12:52.640 04:16:05 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:52.640 04:16:05 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:52.640 04:16:05 -- common/autotest_common.sh@10 -- # set +x 00:12:52.899 04:16:05 -- nvmf/common.sh@469 -- # nvmfpid=77825 00:12:52.899 04:16:05 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:12:52.899 04:16:05 -- nvmf/common.sh@470 -- # waitforlisten 77825 00:12:52.899 04:16:05 -- common/autotest_common.sh@829 -- # '[' -z 77825 ']' 00:12:52.899 04:16:05 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:52.899 04:16:05 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:52.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:52.899 04:16:05 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:52.899 04:16:05 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:52.899 04:16:05 -- common/autotest_common.sh@10 -- # set +x 00:12:52.899 [2024-12-06 04:16:05.258574] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:52.899 [2024-12-06 04:16:05.258703] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:52.899 [2024-12-06 04:16:05.399503] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:53.156 [2024-12-06 04:16:05.485318] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:53.157 [2024-12-06 04:16:05.485501] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:53.157 [2024-12-06 04:16:05.485514] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:53.157 [2024-12-06 04:16:05.485523] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:53.157 [2024-12-06 04:16:05.485553] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:53.157 [2024-12-06 04:16:05.708574] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:53.414 [2024-12-06 04:16:05.740555] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:12:53.414 [2024-12-06 04:16:05.740752] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:53.673 04:16:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:53.673 04:16:06 -- common/autotest_common.sh@862 -- # return 0 00:12:53.673 04:16:06 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:53.673 04:16:06 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:53.673 04:16:06 -- common/autotest_common.sh@10 -- # set +x 00:12:53.673 04:16:06 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:53.673 04:16:06 -- target/tls.sh@216 -- # bdevperf_pid=77857 00:12:53.673 04:16:06 -- target/tls.sh@217 -- # waitforlisten 77857 /var/tmp/bdevperf.sock 00:12:53.673 04:16:06 -- common/autotest_common.sh@829 -- # '[' -z 77857 ']' 00:12:53.673 04:16:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:53.673 04:16:06 -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:12:53.673 04:16:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:53.673 04:16:06 -- target/tls.sh@213 -- # echo '{ 00:12:53.673 "subsystems": [ 00:12:53.673 { 00:12:53.673 "subsystem": "iobuf", 00:12:53.673 "config": [ 00:12:53.673 { 00:12:53.673 "method": "iobuf_set_options", 00:12:53.673 "params": { 00:12:53.673 "small_pool_count": 8192, 00:12:53.673 "large_pool_count": 1024, 00:12:53.673 "small_bufsize": 8192, 00:12:53.673 "large_bufsize": 135168 00:12:53.673 } 00:12:53.673 } 00:12:53.673 ] 00:12:53.673 }, 00:12:53.673 { 00:12:53.673 "subsystem": "sock", 00:12:53.673 "config": [ 00:12:53.673 { 00:12:53.673 "method": "sock_impl_set_options", 00:12:53.673 "params": { 00:12:53.673 "impl_name": "uring", 00:12:53.673 "recv_buf_size": 2097152, 00:12:53.673 "send_buf_size": 2097152, 00:12:53.673 "enable_recv_pipe": true, 00:12:53.673 "enable_quickack": false, 00:12:53.673 "enable_placement_id": 0, 00:12:53.673 "enable_zerocopy_send_server": false, 00:12:53.673 "enable_zerocopy_send_client": false, 00:12:53.673 "zerocopy_threshold": 0, 00:12:53.673 "tls_version": 0, 00:12:53.673 "enable_ktls": false 00:12:53.673 } 00:12:53.673 }, 00:12:53.673 { 00:12:53.673 "method": "sock_impl_set_options", 00:12:53.673 "params": { 00:12:53.673 "impl_name": "posix", 00:12:53.674 "recv_buf_size": 2097152, 00:12:53.674 "send_buf_size": 2097152, 00:12:53.674 "enable_recv_pipe": true, 00:12:53.674 "enable_quickack": false, 00:12:53.674 "enable_placement_id": 0, 00:12:53.674 "enable_zerocopy_send_server": true, 00:12:53.674 "enable_zerocopy_send_client": false, 00:12:53.674 "zerocopy_threshold": 0, 00:12:53.674 "tls_version": 0, 00:12:53.674 "enable_ktls": false 00:12:53.674 } 00:12:53.674 }, 00:12:53.674 { 00:12:53.674 "method": "sock_impl_set_options", 00:12:53.674 "params": { 00:12:53.674 "impl_name": "ssl", 00:12:53.674 "recv_buf_size": 4096, 00:12:53.674 "send_buf_size": 4096, 00:12:53.674 "enable_recv_pipe": true, 00:12:53.674 "enable_quickack": false, 00:12:53.674 "enable_placement_id": 0, 00:12:53.674 "enable_zerocopy_send_server": true, 
00:12:53.674 "enable_zerocopy_send_client": false, 00:12:53.674 "zerocopy_threshold": 0, 00:12:53.674 "tls_version": 0, 00:12:53.674 "enable_ktls": false 00:12:53.674 } 00:12:53.674 } 00:12:53.674 ] 00:12:53.674 }, 00:12:53.674 { 00:12:53.674 "subsystem": "vmd", 00:12:53.674 "config": [] 00:12:53.674 }, 00:12:53.674 { 00:12:53.674 "subsystem": "accel", 00:12:53.674 "config": [ 00:12:53.674 { 00:12:53.674 "method": "accel_set_options", 00:12:53.674 "params": { 00:12:53.674 "small_cache_size": 128, 00:12:53.674 "large_cache_size": 16, 00:12:53.674 "task_count": 2048, 00:12:53.674 "sequence_count": 2048, 00:12:53.674 "buf_count": 2048 00:12:53.674 } 00:12:53.674 } 00:12:53.674 ] 00:12:53.674 }, 00:12:53.674 { 00:12:53.674 "subsystem": "bdev", 00:12:53.674 "config": [ 00:12:53.674 { 00:12:53.674 "method": "bdev_set_options", 00:12:53.674 "params": { 00:12:53.674 "bdev_io_pool_size": 65535, 00:12:53.674 "bdev_io_cache_size": 256, 00:12:53.674 "bdev_auto_examine": true, 00:12:53.674 "iobuf_small_cache_size": 128, 00:12:53.674 "iobuf_large_cache_size": 16 00:12:53.674 } 00:12:53.674 }, 00:12:53.674 { 00:12:53.674 "method": "bdev_raid_set_options", 00:12:53.674 "params": { 00:12:53.674 "process_window_size_kb": 1024 00:12:53.674 } 00:12:53.674 }, 00:12:53.674 { 00:12:53.674 "method": "bdev_iscsi_set_options", 00:12:53.674 "params": { 00:12:53.674 "timeout_sec": 30 00:12:53.674 } 00:12:53.674 }, 00:12:53.674 { 00:12:53.674 "method": "bdev_nvme_set_options", 00:12:53.674 "params": { 00:12:53.674 "action_on_timeout": "none", 00:12:53.674 "timeout_us": 0, 00:12:53.674 "timeout_admin_us": 0, 00:12:53.674 "keep_alive_timeout_ms": 10000, 00:12:53.674 "transport_retry_count": 4, 00:12:53.674 "arbitration_burst": 0, 00:12:53.674 "low_priority_weight": 0, 00:12:53.674 "medium_priority_weight": 0, 00:12:53.674 "high_priority_weight": 0, 00:12:53.674 "nvme_adminq_poll_period_us": 10000, 00:12:53.674 "nvme_ioq_poll_period_us": 0, 00:12:53.674 "io_queue_requests": 512, 00:12:53.674 "delay_cmd_submit": true, 00:12:53.674 "bdev_retry_count": 3, 00:12:53.674 "transport_ack_timeout": 0, 00:12:53.674 "ctrlr_loss_timeout_sec": 0, 00:12:53.674 "reconnect_delay_sec": 0, 00:12:53.674 "fast_io_fail_timeout_sec": 0, 00:12:53.674 "generate_uuids": false, 00:12:53.674 "transport_tos": 0, 00:12:53.674 "io_path_stat": false, 00:12:53.674 "allow_accel_sequence": false 00:12:53.674 } 00:12:53.674 }, 00:12:53.674 { 00:12:53.674 "method": "bdev_nvme_attach_controller", 00:12:53.674 "params": { 00:12:53.674 "name": "TLSTEST", 00:12:53.674 "trtype": "TCP", 00:12:53.674 "adrfam": "IPv4", 00:12:53.674 "traddr": "10.0.0.2", 00:12:53.674 "trsvcid": "4420", 00:12:53.674 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:53.674 "prchk_reftag": false, 00:12:53.674 "prchk_guard": false, 00:12:53.674 "ctrlr_loss_timeout_sec": 0, 00:12:53.674 "reconnect_delay_sec": 0, 00:12:53.674 "fast_io_fail_timeout_sec": 0, 00:12:53.674 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:12:53.674 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:53.674 "hdgst": false, 00:12:53.674 "ddgst": false 00:12:53.674 } 00:12:53.674 }, 00:12:53.674 { 00:12:53.674 "method": "bdev_nvme_set_hotplug", 00:12:53.674 "params": { 00:12:53.674 "period_us": 100000, 00:12:53.674 "enable": false 00:12:53.674 } 00:12:53.674 }, 00:12:53.674 { 00:12:53.674 "method": "bdev_wait_for_examine" 00:12:53.674 } 00:12:53.674 ] 00:12:53.674 }, 00:12:53.674 { 00:12:53.674 "subsystem": "nbd", 00:12:53.674 "config": [] 00:12:53.674 } 00:12:53.674 ] 00:12:53.674 }' 00:12:53.674 
04:16:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:53.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:53.674 04:16:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:53.674 04:16:06 -- common/autotest_common.sh@10 -- # set +x 00:12:53.932 [2024-12-06 04:16:06.235766] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:53.932 [2024-12-06 04:16:06.235858] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77857 ] 00:12:53.932 [2024-12-06 04:16:06.368194] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:53.932 [2024-12-06 04:16:06.452315] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:54.191 [2024-12-06 04:16:06.611010] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:54.759 04:16:07 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:54.759 04:16:07 -- common/autotest_common.sh@862 -- # return 0 00:12:54.759 04:16:07 -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:12:55.019 Running I/O for 10 seconds... 00:13:05.022 00:13:05.022 Latency(us) 00:13:05.022 [2024-12-06T04:16:17.587Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:05.022 [2024-12-06T04:16:17.587Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:13:05.023 Verification LBA range: start 0x0 length 0x2000 00:13:05.023 TLSTESTn1 : 10.01 6050.25 23.63 0.00 0.00 21127.61 1861.82 19303.33 00:13:05.023 [2024-12-06T04:16:17.588Z] =================================================================================================================== 00:13:05.023 [2024-12-06T04:16:17.588Z] Total : 6050.25 23.63 0.00 0.00 21127.61 1861.82 19303.33 00:13:05.023 0 00:13:05.023 04:16:17 -- target/tls.sh@222 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:05.023 04:16:17 -- target/tls.sh@223 -- # killprocess 77857 00:13:05.023 04:16:17 -- common/autotest_common.sh@936 -- # '[' -z 77857 ']' 00:13:05.023 04:16:17 -- common/autotest_common.sh@940 -- # kill -0 77857 00:13:05.023 04:16:17 -- common/autotest_common.sh@941 -- # uname 00:13:05.023 04:16:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:05.023 04:16:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77857 00:13:05.023 killing process with pid 77857 00:13:05.023 Received shutdown signal, test time was about 10.000000 seconds 00:13:05.023 00:13:05.023 Latency(us) 00:13:05.023 [2024-12-06T04:16:17.588Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:05.023 [2024-12-06T04:16:17.588Z] =================================================================================================================== 00:13:05.023 [2024-12-06T04:16:17.588Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:05.023 04:16:17 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:13:05.023 04:16:17 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:13:05.023 04:16:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77857' 00:13:05.023 04:16:17 -- common/autotest_common.sh@955 
-- # kill 77857 00:13:05.023 04:16:17 -- common/autotest_common.sh@960 -- # wait 77857 00:13:05.281 04:16:17 -- target/tls.sh@224 -- # killprocess 77825 00:13:05.281 04:16:17 -- common/autotest_common.sh@936 -- # '[' -z 77825 ']' 00:13:05.281 04:16:17 -- common/autotest_common.sh@940 -- # kill -0 77825 00:13:05.281 04:16:17 -- common/autotest_common.sh@941 -- # uname 00:13:05.281 04:16:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:05.281 04:16:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77825 00:13:05.281 killing process with pid 77825 00:13:05.281 04:16:17 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:05.281 04:16:17 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:05.281 04:16:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77825' 00:13:05.281 04:16:17 -- common/autotest_common.sh@955 -- # kill 77825 00:13:05.281 04:16:17 -- common/autotest_common.sh@960 -- # wait 77825 00:13:05.540 04:16:17 -- target/tls.sh@226 -- # trap - SIGINT SIGTERM EXIT 00:13:05.540 04:16:17 -- target/tls.sh@227 -- # cleanup 00:13:05.540 04:16:17 -- target/tls.sh@15 -- # process_shm --id 0 00:13:05.540 04:16:17 -- common/autotest_common.sh@806 -- # type=--id 00:13:05.540 04:16:17 -- common/autotest_common.sh@807 -- # id=0 00:13:05.540 04:16:17 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:13:05.540 04:16:17 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:13:05.540 04:16:17 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:13:05.540 04:16:17 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:13:05.540 04:16:17 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:13:05.540 04:16:17 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:13:05.540 nvmf_trace.0 00:13:05.540 04:16:17 -- common/autotest_common.sh@821 -- # return 0 00:13:05.540 04:16:17 -- target/tls.sh@16 -- # killprocess 77857 00:13:05.540 04:16:17 -- common/autotest_common.sh@936 -- # '[' -z 77857 ']' 00:13:05.540 04:16:17 -- common/autotest_common.sh@940 -- # kill -0 77857 00:13:05.540 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (77857) - No such process 00:13:05.540 Process with pid 77857 is not found 00:13:05.540 04:16:17 -- common/autotest_common.sh@963 -- # echo 'Process with pid 77857 is not found' 00:13:05.540 04:16:17 -- target/tls.sh@17 -- # nvmftestfini 00:13:05.540 04:16:17 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:05.540 04:16:17 -- nvmf/common.sh@116 -- # sync 00:13:05.540 04:16:17 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:05.540 04:16:17 -- nvmf/common.sh@119 -- # set +e 00:13:05.540 04:16:17 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:05.540 04:16:17 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:05.540 rmmod nvme_tcp 00:13:05.540 rmmod nvme_fabrics 00:13:05.540 rmmod nvme_keyring 00:13:05.540 04:16:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:05.540 04:16:18 -- nvmf/common.sh@123 -- # set -e 00:13:05.540 04:16:18 -- nvmf/common.sh@124 -- # return 0 00:13:05.540 04:16:18 -- nvmf/common.sh@477 -- # '[' -n 77825 ']' 00:13:05.540 04:16:18 -- nvmf/common.sh@478 -- # killprocess 77825 00:13:05.540 04:16:18 -- common/autotest_common.sh@936 -- # '[' -z 77825 ']' 00:13:05.540 04:16:18 -- common/autotest_common.sh@940 -- # kill -0 77825 00:13:05.540 
/home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (77825) - No such process 00:13:05.540 Process with pid 77825 is not found 00:13:05.540 04:16:18 -- common/autotest_common.sh@963 -- # echo 'Process with pid 77825 is not found' 00:13:05.540 04:16:18 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:05.540 04:16:18 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:05.540 04:16:18 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:05.540 04:16:18 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:05.540 04:16:18 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:05.540 04:16:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:05.540 04:16:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:05.540 04:16:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:05.540 04:16:18 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:05.540 04:16:18 -- target/tls.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:13:05.540 ************************************ 00:13:05.540 END TEST nvmf_tls 00:13:05.540 ************************************ 00:13:05.540 00:13:05.540 real 1m12.789s 00:13:05.540 user 1m53.215s 00:13:05.540 sys 0m24.721s 00:13:05.540 04:16:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:05.540 04:16:18 -- common/autotest_common.sh@10 -- # set +x 00:13:05.799 04:16:18 -- nvmf/nvmf.sh@60 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:13:05.799 04:16:18 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:05.799 04:16:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:05.799 04:16:18 -- common/autotest_common.sh@10 -- # set +x 00:13:05.799 ************************************ 00:13:05.799 START TEST nvmf_fips 00:13:05.799 ************************************ 00:13:05.799 04:16:18 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:13:05.799 * Looking for test storage... 
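The nvmf_fips test that starts here validates the OpenSSL environment before touching NVMe-oF at all. Below is a condensed sketch of those checks, built from the same commands that appear in the trace that follows; the 3.0.0 threshold and the expectation that MD5 is rejected are the test's own criteria, and the final pipeline is only an illustration of the "Error setting digest" failure captured further down, not a command the script runs in this exact form.

# fips.sh expects OpenSSL >= 3.0.0 with a loadable FIPS provider:
openssl version
openssl info -modulesdir                 # fips.so should live in this directory
openssl list -providers | grep name      # should show both the base and the fips provider
# Under an enforced FIPS policy a non-approved digest must fail:
echo test | openssl md5 && echo 'md5 accepted (FIPS not enforced)' \
                        || echo 'md5 rejected (FIPS enforced)'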
00:13:05.799 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:13:05.799 04:16:18 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:05.799 04:16:18 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:05.799 04:16:18 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:05.799 04:16:18 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:05.799 04:16:18 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:05.799 04:16:18 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:05.799 04:16:18 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:05.799 04:16:18 -- scripts/common.sh@335 -- # IFS=.-: 00:13:05.799 04:16:18 -- scripts/common.sh@335 -- # read -ra ver1 00:13:05.799 04:16:18 -- scripts/common.sh@336 -- # IFS=.-: 00:13:05.799 04:16:18 -- scripts/common.sh@336 -- # read -ra ver2 00:13:05.799 04:16:18 -- scripts/common.sh@337 -- # local 'op=<' 00:13:05.799 04:16:18 -- scripts/common.sh@339 -- # ver1_l=2 00:13:05.799 04:16:18 -- scripts/common.sh@340 -- # ver2_l=1 00:13:05.799 04:16:18 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:05.799 04:16:18 -- scripts/common.sh@343 -- # case "$op" in 00:13:05.799 04:16:18 -- scripts/common.sh@344 -- # : 1 00:13:05.799 04:16:18 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:05.800 04:16:18 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:05.800 04:16:18 -- scripts/common.sh@364 -- # decimal 1 00:13:05.800 04:16:18 -- scripts/common.sh@352 -- # local d=1 00:13:05.800 04:16:18 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:05.800 04:16:18 -- scripts/common.sh@354 -- # echo 1 00:13:05.800 04:16:18 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:05.800 04:16:18 -- scripts/common.sh@365 -- # decimal 2 00:13:05.800 04:16:18 -- scripts/common.sh@352 -- # local d=2 00:13:05.800 04:16:18 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:05.800 04:16:18 -- scripts/common.sh@354 -- # echo 2 00:13:05.800 04:16:18 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:05.800 04:16:18 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:05.800 04:16:18 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:05.800 04:16:18 -- scripts/common.sh@367 -- # return 0 00:13:05.800 04:16:18 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:05.800 04:16:18 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:05.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:05.800 --rc genhtml_branch_coverage=1 00:13:05.800 --rc genhtml_function_coverage=1 00:13:05.800 --rc genhtml_legend=1 00:13:05.800 --rc geninfo_all_blocks=1 00:13:05.800 --rc geninfo_unexecuted_blocks=1 00:13:05.800 00:13:05.800 ' 00:13:05.800 04:16:18 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:05.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:05.800 --rc genhtml_branch_coverage=1 00:13:05.800 --rc genhtml_function_coverage=1 00:13:05.800 --rc genhtml_legend=1 00:13:05.800 --rc geninfo_all_blocks=1 00:13:05.800 --rc geninfo_unexecuted_blocks=1 00:13:05.800 00:13:05.800 ' 00:13:05.800 04:16:18 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:05.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:05.800 --rc genhtml_branch_coverage=1 00:13:05.800 --rc genhtml_function_coverage=1 00:13:05.800 --rc genhtml_legend=1 00:13:05.800 --rc geninfo_all_blocks=1 00:13:05.800 --rc geninfo_unexecuted_blocks=1 00:13:05.800 00:13:05.800 ' 00:13:05.800 
04:16:18 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:05.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:05.800 --rc genhtml_branch_coverage=1 00:13:05.800 --rc genhtml_function_coverage=1 00:13:05.800 --rc genhtml_legend=1 00:13:05.800 --rc geninfo_all_blocks=1 00:13:05.800 --rc geninfo_unexecuted_blocks=1 00:13:05.800 00:13:05.800 ' 00:13:05.800 04:16:18 -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:05.800 04:16:18 -- nvmf/common.sh@7 -- # uname -s 00:13:05.800 04:16:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:05.800 04:16:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:05.800 04:16:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:05.800 04:16:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:05.800 04:16:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:05.800 04:16:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:05.800 04:16:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:05.800 04:16:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:05.800 04:16:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:05.800 04:16:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:05.800 04:16:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cb4d3929-adbe-4142-b5d1-990bbf2d4fca 00:13:05.800 04:16:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=cb4d3929-adbe-4142-b5d1-990bbf2d4fca 00:13:05.800 04:16:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:05.800 04:16:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:05.800 04:16:18 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:05.800 04:16:18 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:05.800 04:16:18 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:05.800 04:16:18 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:05.800 04:16:18 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:05.800 04:16:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:05.800 04:16:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:05.800 04:16:18 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:05.800 04:16:18 -- paths/export.sh@5 -- # export PATH 00:13:05.800 04:16:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:05.800 04:16:18 -- nvmf/common.sh@46 -- # : 0 00:13:05.800 04:16:18 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:05.800 04:16:18 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:05.800 04:16:18 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:05.800 04:16:18 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:05.800 04:16:18 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:05.800 04:16:18 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:05.800 04:16:18 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:05.800 04:16:18 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:05.800 04:16:18 -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:05.800 04:16:18 -- fips/fips.sh@89 -- # check_openssl_version 00:13:05.800 04:16:18 -- fips/fips.sh@83 -- # local target=3.0.0 00:13:05.800 04:16:18 -- fips/fips.sh@85 -- # openssl version 00:13:05.800 04:16:18 -- fips/fips.sh@85 -- # awk '{print $2}' 00:13:05.800 04:16:18 -- fips/fips.sh@85 -- # ge 3.1.1 3.0.0 00:13:05.800 04:16:18 -- scripts/common.sh@375 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:13:05.800 04:16:18 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:05.800 04:16:18 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:05.800 04:16:18 -- scripts/common.sh@335 -- # IFS=.-: 00:13:05.800 04:16:18 -- scripts/common.sh@335 -- # read -ra ver1 00:13:05.800 04:16:18 -- scripts/common.sh@336 -- # IFS=.-: 00:13:05.800 04:16:18 -- scripts/common.sh@336 -- # read -ra ver2 00:13:05.800 04:16:18 -- scripts/common.sh@337 -- # local 'op=>=' 00:13:05.800 04:16:18 -- scripts/common.sh@339 -- # ver1_l=3 00:13:05.800 04:16:18 -- scripts/common.sh@340 -- # ver2_l=3 00:13:05.800 04:16:18 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:05.800 04:16:18 -- scripts/common.sh@343 -- # case "$op" in 00:13:05.800 04:16:18 -- scripts/common.sh@347 -- # : 1 00:13:05.800 04:16:18 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:05.800 04:16:18 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:05.800 04:16:18 -- scripts/common.sh@364 -- # decimal 3 00:13:05.800 04:16:18 -- scripts/common.sh@352 -- # local d=3 00:13:05.800 04:16:18 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:13:05.800 04:16:18 -- scripts/common.sh@354 -- # echo 3 00:13:05.800 04:16:18 -- scripts/common.sh@364 -- # ver1[v]=3 00:13:05.800 04:16:18 -- scripts/common.sh@365 -- # decimal 3 00:13:06.059 04:16:18 -- scripts/common.sh@352 -- # local d=3 00:13:06.059 04:16:18 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:13:06.059 04:16:18 -- scripts/common.sh@354 -- # echo 3 00:13:06.059 04:16:18 -- scripts/common.sh@365 -- # ver2[v]=3 00:13:06.059 04:16:18 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:06.059 04:16:18 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:06.059 04:16:18 -- scripts/common.sh@363 -- # (( v++ )) 00:13:06.059 04:16:18 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:06.059 04:16:18 -- scripts/common.sh@364 -- # decimal 1 00:13:06.059 04:16:18 -- scripts/common.sh@352 -- # local d=1 00:13:06.059 04:16:18 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:06.059 04:16:18 -- scripts/common.sh@354 -- # echo 1 00:13:06.059 04:16:18 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:06.059 04:16:18 -- scripts/common.sh@365 -- # decimal 0 00:13:06.059 04:16:18 -- scripts/common.sh@352 -- # local d=0 00:13:06.059 04:16:18 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:13:06.059 04:16:18 -- scripts/common.sh@354 -- # echo 0 00:13:06.059 04:16:18 -- scripts/common.sh@365 -- # ver2[v]=0 00:13:06.059 04:16:18 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:06.059 04:16:18 -- scripts/common.sh@366 -- # return 0 00:13:06.059 04:16:18 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:13:06.059 04:16:18 -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:13:06.059 04:16:18 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:13:06.059 04:16:18 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:13:06.059 04:16:18 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:13:06.059 04:16:18 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:13:06.059 04:16:18 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:13:06.059 04:16:18 -- fips/fips.sh@113 -- # build_openssl_config 00:13:06.059 04:16:18 -- fips/fips.sh@37 -- # cat 00:13:06.059 04:16:18 -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:13:06.059 04:16:18 -- fips/fips.sh@58 -- # cat - 00:13:06.059 04:16:18 -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:13:06.059 04:16:18 -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:13:06.059 04:16:18 -- fips/fips.sh@116 -- # mapfile -t providers 00:13:06.059 04:16:18 -- fips/fips.sh@116 -- # grep name 00:13:06.059 04:16:18 -- fips/fips.sh@116 -- # openssl list -providers 00:13:06.059 04:16:18 -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:13:06.059 04:16:18 -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:13:06.059 04:16:18 -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:13:06.059 04:16:18 -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:13:06.059 04:16:18 -- fips/fips.sh@127 -- # : 00:13:06.059 04:16:18 -- common/autotest_common.sh@650 -- # local es=0 00:13:06.059 04:16:18 -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:13:06.059 04:16:18 -- common/autotest_common.sh@638 -- # local arg=openssl 00:13:06.059 04:16:18 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:06.059 04:16:18 -- common/autotest_common.sh@642 -- # type -t openssl 00:13:06.059 04:16:18 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:06.059 04:16:18 -- common/autotest_common.sh@644 -- # type -P openssl 00:13:06.059 04:16:18 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:06.059 04:16:18 -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:13:06.059 04:16:18 -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:13:06.059 04:16:18 -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:13:06.059 Error setting digest 00:13:06.059 40D2413D0F7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:13:06.059 40D2413D0F7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:13:06.059 04:16:18 -- common/autotest_common.sh@653 -- # es=1 00:13:06.059 04:16:18 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:06.059 04:16:18 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:06.059 04:16:18 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:06.059 04:16:18 -- fips/fips.sh@130 -- # nvmftestinit 00:13:06.059 04:16:18 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:06.059 04:16:18 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:06.059 04:16:18 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:06.059 04:16:18 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:06.059 04:16:18 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:06.059 04:16:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:06.059 04:16:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:06.059 04:16:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:06.059 04:16:18 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:06.059 04:16:18 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:06.059 04:16:18 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:06.059 04:16:18 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:06.059 04:16:18 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:06.059 04:16:18 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:06.059 04:16:18 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:06.059 04:16:18 -- 
nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:06.059 04:16:18 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:06.059 04:16:18 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:06.059 04:16:18 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:06.059 04:16:18 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:06.059 04:16:18 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:06.059 04:16:18 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:06.059 04:16:18 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:06.059 04:16:18 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:06.059 04:16:18 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:06.059 04:16:18 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:06.059 04:16:18 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:06.059 04:16:18 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:06.059 Cannot find device "nvmf_tgt_br" 00:13:06.059 04:16:18 -- nvmf/common.sh@154 -- # true 00:13:06.059 04:16:18 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:06.059 Cannot find device "nvmf_tgt_br2" 00:13:06.059 04:16:18 -- nvmf/common.sh@155 -- # true 00:13:06.059 04:16:18 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:06.059 04:16:18 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:06.059 Cannot find device "nvmf_tgt_br" 00:13:06.059 04:16:18 -- nvmf/common.sh@157 -- # true 00:13:06.059 04:16:18 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:06.059 Cannot find device "nvmf_tgt_br2" 00:13:06.059 04:16:18 -- nvmf/common.sh@158 -- # true 00:13:06.059 04:16:18 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:06.059 04:16:18 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:06.060 04:16:18 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:06.060 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:06.060 04:16:18 -- nvmf/common.sh@161 -- # true 00:13:06.060 04:16:18 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:06.060 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:06.060 04:16:18 -- nvmf/common.sh@162 -- # true 00:13:06.060 04:16:18 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:06.060 04:16:18 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:06.328 04:16:18 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:06.328 04:16:18 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:06.328 04:16:18 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:06.328 04:16:18 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:06.328 04:16:18 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:06.328 04:16:18 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:06.328 04:16:18 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:06.328 04:16:18 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:06.328 04:16:18 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:06.328 04:16:18 -- 
nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:06.328 04:16:18 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:06.328 04:16:18 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:06.328 04:16:18 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:06.328 04:16:18 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:06.328 04:16:18 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:06.328 04:16:18 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:06.328 04:16:18 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:06.328 04:16:18 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:06.328 04:16:18 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:06.328 04:16:18 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:06.328 04:16:18 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:06.328 04:16:18 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:06.328 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:06.328 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.087 ms 00:13:06.328 00:13:06.328 --- 10.0.0.2 ping statistics --- 00:13:06.328 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:06.328 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:13:06.328 04:16:18 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:06.328 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:06.328 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:13:06.328 00:13:06.328 --- 10.0.0.3 ping statistics --- 00:13:06.328 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:06.328 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:13:06.328 04:16:18 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:06.328 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:06.328 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:13:06.328 00:13:06.328 --- 10.0.0.1 ping statistics --- 00:13:06.328 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:06.328 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:13:06.328 04:16:18 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:06.328 04:16:18 -- nvmf/common.sh@421 -- # return 0 00:13:06.328 04:16:18 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:06.328 04:16:18 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:06.329 04:16:18 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:06.329 04:16:18 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:06.329 04:16:18 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:06.329 04:16:18 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:06.329 04:16:18 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:06.329 04:16:18 -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:13:06.329 04:16:18 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:06.329 04:16:18 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:06.329 04:16:18 -- common/autotest_common.sh@10 -- # set +x 00:13:06.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
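All of the addressing this test uses (initiator 10.0.0.1 on the host, target 10.0.0.2 inside the nvmf_tgt_ns_spdk namespace) comes from the veth-and-bridge topology that nvmf_veth_init just built above. Reduced to its essentials and keeping the same interface names, the setup is roughly the sketch below; the second target interface (10.0.0.3) and the cleanup/error handling are omitted:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2      # host to namespaced target, as verified in the trace above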
00:13:06.329 04:16:18 -- nvmf/common.sh@469 -- # nvmfpid=78212 00:13:06.329 04:16:18 -- nvmf/common.sh@470 -- # waitforlisten 78212 00:13:06.329 04:16:18 -- common/autotest_common.sh@829 -- # '[' -z 78212 ']' 00:13:06.329 04:16:18 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:06.329 04:16:18 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:06.329 04:16:18 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:06.329 04:16:18 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:06.329 04:16:18 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:06.329 04:16:18 -- common/autotest_common.sh@10 -- # set +x 00:13:06.599 [2024-12-06 04:16:18.905473] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:06.599 [2024-12-06 04:16:18.905572] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:06.599 [2024-12-06 04:16:19.047137] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:06.599 [2024-12-06 04:16:19.119749] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:06.599 [2024-12-06 04:16:19.119906] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:06.599 [2024-12-06 04:16:19.119919] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:06.599 [2024-12-06 04:16:19.119928] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
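As an aside, the two trace-collection hints in the notices above translate to roughly the following. This is a hedged sketch: the spdk_trace binary location and the copy destination are assumptions, not shown in this log.

  # Live snapshot of the nvmf tracepoints for app instance 0, while nvmf_tgt is still running
  ./build/bin/spdk_trace -s nvmf -i 0
  # ...or keep the raw shared-memory buffer for offline analysis (destination arbitrary)
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0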
00:13:06.599 [2024-12-06 04:16:19.119950] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:07.535 04:16:19 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:07.535 04:16:19 -- common/autotest_common.sh@862 -- # return 0 00:13:07.535 04:16:19 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:07.535 04:16:19 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:07.535 04:16:19 -- common/autotest_common.sh@10 -- # set +x 00:13:07.535 04:16:19 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:07.535 04:16:19 -- fips/fips.sh@133 -- # trap cleanup EXIT 00:13:07.535 04:16:19 -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:13:07.535 04:16:19 -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:13:07.535 04:16:19 -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:13:07.535 04:16:19 -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:13:07.535 04:16:19 -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:13:07.535 04:16:19 -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:13:07.535 04:16:19 -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:07.795 [2024-12-06 04:16:20.218313] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:07.795 [2024-12-06 04:16:20.234278] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:07.795 [2024-12-06 04:16:20.234527] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:07.795 malloc0 00:13:07.795 04:16:20 -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:07.795 04:16:20 -- fips/fips.sh@147 -- # bdevperf_pid=78252 00:13:07.795 04:16:20 -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:07.795 04:16:20 -- fips/fips.sh@148 -- # waitforlisten 78252 /var/tmp/bdevperf.sock 00:13:07.795 04:16:20 -- common/autotest_common.sh@829 -- # '[' -z 78252 ']' 00:13:07.795 04:16:20 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:07.795 04:16:20 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:07.795 04:16:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:07.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:07.795 04:16:20 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:07.795 04:16:20 -- common/autotest_common.sh@10 -- # set +x 00:13:07.795 [2024-12-06 04:16:20.353929] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
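Pulling the TLS-specific pieces together: fips.sh writes the interchange-format PSK to a 0600 file, lets setup_nvmf_tgt_conf configure the TCP listener over rpc.py, and then (in the trace just below) attaches the bdevperf initiator with --psk. A condensed sketch, assuming the echo output is redirected into key.txt, since xtrace does not record redirections; all flags are copied from the trace.

  # TLS PSK in NVMe interchange format, stored with owner-only permissions
  key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
  key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt
  echo -n "$key" > "$key_path"   # redirection assumed
  chmod 0600 "$key_path"

  # Initiator side: bdevperf (RPC socket /var/tmp/bdevperf.sock) attaches to the
  # target's 10.0.0.2:4420 listener using the same PSK
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      --psk "$key_path"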
00:13:07.795 [2024-12-06 04:16:20.354022] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78252 ] 00:13:08.055 [2024-12-06 04:16:20.491620] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:08.055 [2024-12-06 04:16:20.571550] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:08.992 04:16:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:08.992 04:16:21 -- common/autotest_common.sh@862 -- # return 0 00:13:08.992 04:16:21 -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:13:08.992 [2024-12-06 04:16:21.546171] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:09.252 TLSTESTn1 00:13:09.252 04:16:21 -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:09.252 Running I/O for 10 seconds... 00:13:19.232 00:13:19.232 Latency(us) 00:13:19.232 [2024-12-06T04:16:31.797Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:19.232 [2024-12-06T04:16:31.797Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:13:19.232 Verification LBA range: start 0x0 length 0x2000 00:13:19.232 TLSTESTn1 : 10.02 6336.03 24.75 0.00 0.00 20168.42 5421.61 24307.90 00:13:19.232 [2024-12-06T04:16:31.797Z] =================================================================================================================== 00:13:19.232 [2024-12-06T04:16:31.798Z] Total : 6336.03 24.75 0.00 0.00 20168.42 5421.61 24307.90 00:13:19.233 0 00:13:19.492 04:16:31 -- fips/fips.sh@1 -- # cleanup 00:13:19.492 04:16:31 -- fips/fips.sh@15 -- # process_shm --id 0 00:13:19.492 04:16:31 -- common/autotest_common.sh@806 -- # type=--id 00:13:19.492 04:16:31 -- common/autotest_common.sh@807 -- # id=0 00:13:19.492 04:16:31 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:13:19.492 04:16:31 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:13:19.492 04:16:31 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:13:19.492 04:16:31 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:13:19.492 04:16:31 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:13:19.492 04:16:31 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:13:19.492 nvmf_trace.0 00:13:19.492 04:16:31 -- common/autotest_common.sh@821 -- # return 0 00:13:19.492 04:16:31 -- fips/fips.sh@16 -- # killprocess 78252 00:13:19.492 04:16:31 -- common/autotest_common.sh@936 -- # '[' -z 78252 ']' 00:13:19.492 04:16:31 -- common/autotest_common.sh@940 -- # kill -0 78252 00:13:19.492 04:16:31 -- common/autotest_common.sh@941 -- # uname 00:13:19.492 04:16:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:19.492 04:16:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78252 00:13:19.492 killing process with pid 78252 00:13:19.492 Received shutdown signal, test time was about 10.000000 seconds 00:13:19.492 00:13:19.492 Latency(us) 00:13:19.492 
[2024-12-06T04:16:32.057Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:19.492 [2024-12-06T04:16:32.057Z] =================================================================================================================== 00:13:19.492 [2024-12-06T04:16:32.057Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:19.492 04:16:31 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:13:19.492 04:16:31 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:13:19.492 04:16:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78252' 00:13:19.492 04:16:31 -- common/autotest_common.sh@955 -- # kill 78252 00:13:19.492 04:16:31 -- common/autotest_common.sh@960 -- # wait 78252 00:13:19.752 04:16:32 -- fips/fips.sh@17 -- # nvmftestfini 00:13:19.752 04:16:32 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:19.752 04:16:32 -- nvmf/common.sh@116 -- # sync 00:13:19.752 04:16:32 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:19.752 04:16:32 -- nvmf/common.sh@119 -- # set +e 00:13:19.752 04:16:32 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:19.752 04:16:32 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:19.752 rmmod nvme_tcp 00:13:19.752 rmmod nvme_fabrics 00:13:19.752 rmmod nvme_keyring 00:13:19.752 04:16:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:19.752 04:16:32 -- nvmf/common.sh@123 -- # set -e 00:13:19.752 04:16:32 -- nvmf/common.sh@124 -- # return 0 00:13:19.752 04:16:32 -- nvmf/common.sh@477 -- # '[' -n 78212 ']' 00:13:19.752 04:16:32 -- nvmf/common.sh@478 -- # killprocess 78212 00:13:19.752 04:16:32 -- common/autotest_common.sh@936 -- # '[' -z 78212 ']' 00:13:19.752 04:16:32 -- common/autotest_common.sh@940 -- # kill -0 78212 00:13:19.752 04:16:32 -- common/autotest_common.sh@941 -- # uname 00:13:19.752 04:16:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:19.752 04:16:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78212 00:13:19.752 killing process with pid 78212 00:13:19.752 04:16:32 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:19.752 04:16:32 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:19.752 04:16:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78212' 00:13:19.752 04:16:32 -- common/autotest_common.sh@955 -- # kill 78212 00:13:19.752 04:16:32 -- common/autotest_common.sh@960 -- # wait 78212 00:13:20.011 04:16:32 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:20.011 04:16:32 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:20.011 04:16:32 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:20.011 04:16:32 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:20.011 04:16:32 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:20.011 04:16:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:20.011 04:16:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:20.011 04:16:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:20.011 04:16:32 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:20.011 04:16:32 -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:13:20.011 ************************************ 00:13:20.011 END TEST nvmf_fips 00:13:20.011 ************************************ 00:13:20.011 00:13:20.011 real 0m14.357s 00:13:20.011 user 0m19.486s 00:13:20.011 sys 0m5.864s 00:13:20.011 04:16:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 
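The fips cleanup traced above amounts to the sequence below. This is a sketch: the body of _remove_spdk_ns is not shown in this log, so the netns deletion is an assumption; the remaining commands and pids are taken from the trace.

  # Stop the bdevperf initiator and the nvmf target (pids 78252 and 78212 in this run)
  kill 78252 && wait 78252
  kill 78212 && wait 78212

  # Unload the kernel NVMe/TCP stack; the rmmod lines above show nvme_tcp,
  # nvme_fabrics and nvme_keyring going away
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics

  # Drop the test namespace (assumed to be what _remove_spdk_ns does), flush the
  # initiator address, and remove the PSK file
  ip netns delete nvmf_tgt_ns_spdk
  ip -4 addr flush nvmf_init_if
  rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt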
00:13:20.011 04:16:32 -- common/autotest_common.sh@10 -- # set +x 00:13:20.011 04:16:32 -- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 ']' 00:13:20.011 04:16:32 -- nvmf/nvmf.sh@64 -- # run_test nvmf_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:13:20.011 04:16:32 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:20.011 04:16:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:20.011 04:16:32 -- common/autotest_common.sh@10 -- # set +x 00:13:20.011 ************************************ 00:13:20.011 START TEST nvmf_fuzz 00:13:20.011 ************************************ 00:13:20.011 04:16:32 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:13:20.272 * Looking for test storage... 00:13:20.272 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:20.272 04:16:32 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:20.272 04:16:32 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:20.272 04:16:32 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:20.272 04:16:32 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:20.272 04:16:32 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:20.272 04:16:32 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:20.272 04:16:32 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:20.272 04:16:32 -- scripts/common.sh@335 -- # IFS=.-: 00:13:20.272 04:16:32 -- scripts/common.sh@335 -- # read -ra ver1 00:13:20.272 04:16:32 -- scripts/common.sh@336 -- # IFS=.-: 00:13:20.272 04:16:32 -- scripts/common.sh@336 -- # read -ra ver2 00:13:20.272 04:16:32 -- scripts/common.sh@337 -- # local 'op=<' 00:13:20.272 04:16:32 -- scripts/common.sh@339 -- # ver1_l=2 00:13:20.272 04:16:32 -- scripts/common.sh@340 -- # ver2_l=1 00:13:20.272 04:16:32 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:20.272 04:16:32 -- scripts/common.sh@343 -- # case "$op" in 00:13:20.272 04:16:32 -- scripts/common.sh@344 -- # : 1 00:13:20.272 04:16:32 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:20.272 04:16:32 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:20.272 04:16:32 -- scripts/common.sh@364 -- # decimal 1 00:13:20.272 04:16:32 -- scripts/common.sh@352 -- # local d=1 00:13:20.272 04:16:32 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:20.272 04:16:32 -- scripts/common.sh@354 -- # echo 1 00:13:20.272 04:16:32 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:20.272 04:16:32 -- scripts/common.sh@365 -- # decimal 2 00:13:20.272 04:16:32 -- scripts/common.sh@352 -- # local d=2 00:13:20.272 04:16:32 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:20.272 04:16:32 -- scripts/common.sh@354 -- # echo 2 00:13:20.272 04:16:32 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:20.272 04:16:32 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:20.272 04:16:32 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:20.272 04:16:32 -- scripts/common.sh@367 -- # return 0 00:13:20.272 04:16:32 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:20.272 04:16:32 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:20.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:20.272 --rc genhtml_branch_coverage=1 00:13:20.272 --rc genhtml_function_coverage=1 00:13:20.272 --rc genhtml_legend=1 00:13:20.272 --rc geninfo_all_blocks=1 00:13:20.272 --rc geninfo_unexecuted_blocks=1 00:13:20.272 00:13:20.272 ' 00:13:20.272 04:16:32 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:20.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:20.272 --rc genhtml_branch_coverage=1 00:13:20.272 --rc genhtml_function_coverage=1 00:13:20.272 --rc genhtml_legend=1 00:13:20.272 --rc geninfo_all_blocks=1 00:13:20.272 --rc geninfo_unexecuted_blocks=1 00:13:20.272 00:13:20.272 ' 00:13:20.272 04:16:32 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:20.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:20.272 --rc genhtml_branch_coverage=1 00:13:20.272 --rc genhtml_function_coverage=1 00:13:20.272 --rc genhtml_legend=1 00:13:20.272 --rc geninfo_all_blocks=1 00:13:20.272 --rc geninfo_unexecuted_blocks=1 00:13:20.272 00:13:20.272 ' 00:13:20.272 04:16:32 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:20.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:20.272 --rc genhtml_branch_coverage=1 00:13:20.272 --rc genhtml_function_coverage=1 00:13:20.272 --rc genhtml_legend=1 00:13:20.272 --rc geninfo_all_blocks=1 00:13:20.272 --rc geninfo_unexecuted_blocks=1 00:13:20.272 00:13:20.272 ' 00:13:20.272 04:16:32 -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:20.272 04:16:32 -- nvmf/common.sh@7 -- # uname -s 00:13:20.272 04:16:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:20.272 04:16:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:20.272 04:16:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:20.272 04:16:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:20.272 04:16:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:20.272 04:16:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:20.272 04:16:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:20.272 04:16:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:20.272 04:16:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:20.272 04:16:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:20.272 04:16:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cb4d3929-adbe-4142-b5d1-990bbf2d4fca 
00:13:20.272 04:16:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=cb4d3929-adbe-4142-b5d1-990bbf2d4fca 00:13:20.272 04:16:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:20.272 04:16:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:20.272 04:16:32 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:20.272 04:16:32 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:20.272 04:16:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:20.272 04:16:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:20.272 04:16:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:20.272 04:16:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.272 04:16:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.272 04:16:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.272 04:16:32 -- paths/export.sh@5 -- # export PATH 00:13:20.272 04:16:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.272 04:16:32 -- nvmf/common.sh@46 -- # : 0 00:13:20.272 04:16:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:20.272 04:16:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:20.272 04:16:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:20.273 04:16:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:20.273 04:16:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:20.273 04:16:32 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:13:20.273 04:16:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:20.273 04:16:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:20.273 04:16:32 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:13:20.273 04:16:32 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:20.273 04:16:32 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:20.273 04:16:32 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:20.273 04:16:32 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:20.273 04:16:32 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:20.273 04:16:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:20.273 04:16:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:20.273 04:16:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:20.273 04:16:32 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:20.273 04:16:32 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:20.273 04:16:32 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:20.273 04:16:32 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:20.273 04:16:32 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:20.273 04:16:32 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:20.273 04:16:32 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:20.273 04:16:32 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:20.273 04:16:32 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:20.273 04:16:32 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:20.273 04:16:32 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:20.273 04:16:32 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:20.273 04:16:32 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:20.273 04:16:32 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:20.273 04:16:32 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:20.273 04:16:32 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:20.273 04:16:32 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:20.273 04:16:32 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:20.273 04:16:32 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:20.273 04:16:32 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:20.273 Cannot find device "nvmf_tgt_br" 00:13:20.273 04:16:32 -- nvmf/common.sh@154 -- # true 00:13:20.273 04:16:32 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:20.273 Cannot find device "nvmf_tgt_br2" 00:13:20.273 04:16:32 -- nvmf/common.sh@155 -- # true 00:13:20.273 04:16:32 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:20.273 04:16:32 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:20.273 Cannot find device "nvmf_tgt_br" 00:13:20.273 04:16:32 -- nvmf/common.sh@157 -- # true 00:13:20.273 04:16:32 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:20.273 Cannot find device "nvmf_tgt_br2" 00:13:20.273 04:16:32 -- nvmf/common.sh@158 -- # true 00:13:20.273 04:16:32 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:20.533 04:16:32 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:20.533 04:16:32 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:20.533 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:20.533 04:16:32 -- nvmf/common.sh@161 -- # true 00:13:20.533 04:16:32 -- 
nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:20.533 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:20.533 04:16:32 -- nvmf/common.sh@162 -- # true 00:13:20.533 04:16:32 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:20.533 04:16:32 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:20.533 04:16:32 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:20.533 04:16:32 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:20.533 04:16:32 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:20.533 04:16:32 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:20.533 04:16:32 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:20.533 04:16:32 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:20.533 04:16:32 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:20.533 04:16:32 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:20.533 04:16:32 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:20.533 04:16:32 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:20.533 04:16:32 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:20.533 04:16:33 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:20.533 04:16:33 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:20.533 04:16:33 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:20.533 04:16:33 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:20.533 04:16:33 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:20.533 04:16:33 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:20.533 04:16:33 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:20.533 04:16:33 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:20.533 04:16:33 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:20.533 04:16:33 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:20.533 04:16:33 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:20.533 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:20.533 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:13:20.533 00:13:20.533 --- 10.0.0.2 ping statistics --- 00:13:20.533 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:20.533 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:13:20.533 04:16:33 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:20.533 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:20.533 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:13:20.533 00:13:20.533 --- 10.0.0.3 ping statistics --- 00:13:20.533 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:20.534 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:13:20.534 04:16:33 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:20.534 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:20.534 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:13:20.534 00:13:20.534 --- 10.0.0.1 ping statistics --- 00:13:20.534 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:20.534 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:13:20.534 04:16:33 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:20.534 04:16:33 -- nvmf/common.sh@421 -- # return 0 00:13:20.534 04:16:33 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:20.534 04:16:33 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:20.534 04:16:33 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:20.534 04:16:33 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:20.534 04:16:33 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:20.534 04:16:33 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:20.534 04:16:33 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:20.794 04:16:33 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=78583 00:13:20.794 04:16:33 -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:20.794 04:16:33 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:20.794 04:16:33 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 78583 00:13:20.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:20.794 04:16:33 -- common/autotest_common.sh@829 -- # '[' -z 78583 ']' 00:13:20.794 04:16:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:20.794 04:16:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:20.794 04:16:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
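The fabrics_fuzz flow, target launch above plus the RPC configuration and two fuzz passes in the trace that follows, condenses to the sketch below. Paths are relative to the spdk repo root, and rpc.py stands in for the suite's rpc_cmd wrapper; the command-line flags themselves are copied from the trace.

  # Target side: single-core nvmf_tgt inside the test namespace (pid 78583 in this run)
  ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &

  # Configuration applied over /var/tmp/spdk.sock once the target is listening
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # Two fuzz passes against the listener: a 30 s seeded random run, then a replay
  # of example.json; both print "Shutting down the fuzz application" on completion
  trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420'
  test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F "$trid" -N -a
  test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F "$trid" -j test/app/fuzz/nvme_fuzz/example.json -a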
00:13:20.794 04:16:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:20.794 04:16:33 -- common/autotest_common.sh@10 -- # set +x 00:13:21.731 04:16:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:21.731 04:16:34 -- common/autotest_common.sh@862 -- # return 0 00:13:21.731 04:16:34 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:21.731 04:16:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.731 04:16:34 -- common/autotest_common.sh@10 -- # set +x 00:13:21.731 04:16:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.731 04:16:34 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:13:21.731 04:16:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.731 04:16:34 -- common/autotest_common.sh@10 -- # set +x 00:13:21.731 Malloc0 00:13:21.731 04:16:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.731 04:16:34 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:21.731 04:16:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.731 04:16:34 -- common/autotest_common.sh@10 -- # set +x 00:13:21.731 04:16:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.731 04:16:34 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:21.731 04:16:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.731 04:16:34 -- common/autotest_common.sh@10 -- # set +x 00:13:21.731 04:16:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.731 04:16:34 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:21.731 04:16:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.731 04:16:34 -- common/autotest_common.sh@10 -- # set +x 00:13:21.731 04:16:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.731 04:16:34 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:13:21.731 04:16:34 -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:13:22.334 Shutting down the fuzz application 00:13:22.334 04:16:34 -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:13:22.334 Shutting down the fuzz application 00:13:22.334 04:16:34 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:22.334 04:16:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.334 04:16:34 -- common/autotest_common.sh@10 -- # set +x 00:13:22.334 04:16:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.334 04:16:34 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:13:22.334 04:16:34 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:13:22.334 04:16:34 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:22.334 04:16:34 -- nvmf/common.sh@116 -- # sync 00:13:22.594 04:16:34 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:22.594 04:16:34 -- nvmf/common.sh@119 -- # set +e 00:13:22.594 04:16:34 -- 
nvmf/common.sh@120 -- # for i in {1..20} 00:13:22.594 04:16:34 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:22.594 rmmod nvme_tcp 00:13:22.594 rmmod nvme_fabrics 00:13:22.594 rmmod nvme_keyring 00:13:22.594 04:16:34 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:22.594 04:16:34 -- nvmf/common.sh@123 -- # set -e 00:13:22.594 04:16:34 -- nvmf/common.sh@124 -- # return 0 00:13:22.594 04:16:34 -- nvmf/common.sh@477 -- # '[' -n 78583 ']' 00:13:22.594 04:16:34 -- nvmf/common.sh@478 -- # killprocess 78583 00:13:22.594 04:16:34 -- common/autotest_common.sh@936 -- # '[' -z 78583 ']' 00:13:22.594 04:16:34 -- common/autotest_common.sh@940 -- # kill -0 78583 00:13:22.594 04:16:34 -- common/autotest_common.sh@941 -- # uname 00:13:22.594 04:16:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:22.594 04:16:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78583 00:13:22.594 killing process with pid 78583 00:13:22.594 04:16:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:22.594 04:16:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:22.594 04:16:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78583' 00:13:22.594 04:16:35 -- common/autotest_common.sh@955 -- # kill 78583 00:13:22.594 04:16:35 -- common/autotest_common.sh@960 -- # wait 78583 00:13:22.852 04:16:35 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:22.852 04:16:35 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:22.852 04:16:35 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:22.852 04:16:35 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:22.852 04:16:35 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:22.852 04:16:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:22.852 04:16:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:22.852 04:16:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:22.852 04:16:35 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:22.852 04:16:35 -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:13:22.852 00:13:22.852 real 0m2.732s 00:13:22.852 user 0m2.844s 00:13:22.852 sys 0m0.641s 00:13:22.852 ************************************ 00:13:22.852 END TEST nvmf_fuzz 00:13:22.852 ************************************ 00:13:22.852 04:16:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:22.852 04:16:35 -- common/autotest_common.sh@10 -- # set +x 00:13:22.852 04:16:35 -- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:13:22.852 04:16:35 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:22.852 04:16:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:22.852 04:16:35 -- common/autotest_common.sh@10 -- # set +x 00:13:22.852 ************************************ 00:13:22.852 START TEST nvmf_multiconnection 00:13:22.852 ************************************ 00:13:22.852 04:16:35 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:13:22.852 * Looking for test storage... 
00:13:22.852 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:22.852 04:16:35 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:22.852 04:16:35 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:22.852 04:16:35 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:23.110 04:16:35 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:23.110 04:16:35 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:23.110 04:16:35 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:23.110 04:16:35 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:23.110 04:16:35 -- scripts/common.sh@335 -- # IFS=.-: 00:13:23.110 04:16:35 -- scripts/common.sh@335 -- # read -ra ver1 00:13:23.110 04:16:35 -- scripts/common.sh@336 -- # IFS=.-: 00:13:23.110 04:16:35 -- scripts/common.sh@336 -- # read -ra ver2 00:13:23.110 04:16:35 -- scripts/common.sh@337 -- # local 'op=<' 00:13:23.110 04:16:35 -- scripts/common.sh@339 -- # ver1_l=2 00:13:23.110 04:16:35 -- scripts/common.sh@340 -- # ver2_l=1 00:13:23.110 04:16:35 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:23.110 04:16:35 -- scripts/common.sh@343 -- # case "$op" in 00:13:23.110 04:16:35 -- scripts/common.sh@344 -- # : 1 00:13:23.110 04:16:35 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:23.110 04:16:35 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:23.110 04:16:35 -- scripts/common.sh@364 -- # decimal 1 00:13:23.110 04:16:35 -- scripts/common.sh@352 -- # local d=1 00:13:23.110 04:16:35 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:23.110 04:16:35 -- scripts/common.sh@354 -- # echo 1 00:13:23.110 04:16:35 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:23.110 04:16:35 -- scripts/common.sh@365 -- # decimal 2 00:13:23.110 04:16:35 -- scripts/common.sh@352 -- # local d=2 00:13:23.110 04:16:35 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:23.110 04:16:35 -- scripts/common.sh@354 -- # echo 2 00:13:23.110 04:16:35 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:23.110 04:16:35 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:23.110 04:16:35 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:23.110 04:16:35 -- scripts/common.sh@367 -- # return 0 00:13:23.110 04:16:35 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:23.110 04:16:35 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:23.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:23.110 --rc genhtml_branch_coverage=1 00:13:23.110 --rc genhtml_function_coverage=1 00:13:23.110 --rc genhtml_legend=1 00:13:23.110 --rc geninfo_all_blocks=1 00:13:23.110 --rc geninfo_unexecuted_blocks=1 00:13:23.110 00:13:23.110 ' 00:13:23.110 04:16:35 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:23.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:23.110 --rc genhtml_branch_coverage=1 00:13:23.110 --rc genhtml_function_coverage=1 00:13:23.110 --rc genhtml_legend=1 00:13:23.110 --rc geninfo_all_blocks=1 00:13:23.110 --rc geninfo_unexecuted_blocks=1 00:13:23.110 00:13:23.110 ' 00:13:23.110 04:16:35 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:23.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:23.110 --rc genhtml_branch_coverage=1 00:13:23.110 --rc genhtml_function_coverage=1 00:13:23.110 --rc genhtml_legend=1 00:13:23.110 --rc geninfo_all_blocks=1 00:13:23.110 --rc geninfo_unexecuted_blocks=1 00:13:23.110 00:13:23.110 ' 00:13:23.110 
04:16:35 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:23.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:23.110 --rc genhtml_branch_coverage=1 00:13:23.110 --rc genhtml_function_coverage=1 00:13:23.110 --rc genhtml_legend=1 00:13:23.110 --rc geninfo_all_blocks=1 00:13:23.110 --rc geninfo_unexecuted_blocks=1 00:13:23.110 00:13:23.110 ' 00:13:23.110 04:16:35 -- target/multiconnection.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:23.110 04:16:35 -- nvmf/common.sh@7 -- # uname -s 00:13:23.110 04:16:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:23.110 04:16:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:23.110 04:16:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:23.110 04:16:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:23.110 04:16:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:23.110 04:16:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:23.110 04:16:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:23.110 04:16:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:23.110 04:16:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:23.110 04:16:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:23.110 04:16:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cb4d3929-adbe-4142-b5d1-990bbf2d4fca 00:13:23.110 04:16:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=cb4d3929-adbe-4142-b5d1-990bbf2d4fca 00:13:23.110 04:16:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:23.110 04:16:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:23.110 04:16:35 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:23.110 04:16:35 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:23.110 04:16:35 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:23.110 04:16:35 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:23.111 04:16:35 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:23.111 04:16:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.111 04:16:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.111 04:16:35 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.111 04:16:35 -- paths/export.sh@5 -- # export PATH 00:13:23.111 04:16:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.111 04:16:35 -- nvmf/common.sh@46 -- # : 0 00:13:23.111 04:16:35 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:23.111 04:16:35 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:23.111 04:16:35 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:23.111 04:16:35 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:23.111 04:16:35 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:23.111 04:16:35 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:23.111 04:16:35 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:23.111 04:16:35 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:23.111 04:16:35 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:23.111 04:16:35 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:23.111 04:16:35 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:13:23.111 04:16:35 -- target/multiconnection.sh@16 -- # nvmftestinit 00:13:23.111 04:16:35 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:23.111 04:16:35 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:23.111 04:16:35 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:23.111 04:16:35 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:23.111 04:16:35 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:23.111 04:16:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:23.111 04:16:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:23.111 04:16:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:23.111 04:16:35 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:23.111 04:16:35 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:23.111 04:16:35 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:23.111 04:16:35 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:23.111 04:16:35 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:23.111 04:16:35 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:23.111 04:16:35 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:23.111 04:16:35 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:23.111 04:16:35 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:23.111 04:16:35 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:23.111 04:16:35 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:23.111 04:16:35 -- 
nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:23.111 04:16:35 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:23.111 04:16:35 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:23.111 04:16:35 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:23.111 04:16:35 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:23.111 04:16:35 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:23.111 04:16:35 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:23.111 04:16:35 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:23.111 04:16:35 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:23.111 Cannot find device "nvmf_tgt_br" 00:13:23.111 04:16:35 -- nvmf/common.sh@154 -- # true 00:13:23.111 04:16:35 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:23.111 Cannot find device "nvmf_tgt_br2" 00:13:23.111 04:16:35 -- nvmf/common.sh@155 -- # true 00:13:23.111 04:16:35 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:23.111 04:16:35 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:23.111 Cannot find device "nvmf_tgt_br" 00:13:23.111 04:16:35 -- nvmf/common.sh@157 -- # true 00:13:23.111 04:16:35 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:23.111 Cannot find device "nvmf_tgt_br2" 00:13:23.111 04:16:35 -- nvmf/common.sh@158 -- # true 00:13:23.111 04:16:35 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:23.111 04:16:35 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:23.369 04:16:35 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:23.369 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:23.369 04:16:35 -- nvmf/common.sh@161 -- # true 00:13:23.369 04:16:35 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:23.369 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:23.369 04:16:35 -- nvmf/common.sh@162 -- # true 00:13:23.369 04:16:35 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:23.369 04:16:35 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:23.369 04:16:35 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:23.369 04:16:35 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:23.369 04:16:35 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:23.369 04:16:35 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:23.369 04:16:35 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:23.369 04:16:35 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:23.369 04:16:35 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:23.369 04:16:35 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:23.369 04:16:35 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:23.369 04:16:35 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:23.369 04:16:35 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:23.369 04:16:35 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:23.369 04:16:35 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set nvmf_tgt_if2 up 00:13:23.369 04:16:35 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:23.369 04:16:35 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:23.369 04:16:35 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:23.369 04:16:35 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:23.369 04:16:35 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:23.369 04:16:35 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:23.369 04:16:35 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:23.369 04:16:35 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:23.369 04:16:35 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:23.369 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:23.369 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:13:23.369 00:13:23.369 --- 10.0.0.2 ping statistics --- 00:13:23.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:23.369 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:13:23.369 04:16:35 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:23.369 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:23.369 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:13:23.369 00:13:23.369 --- 10.0.0.3 ping statistics --- 00:13:23.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:23.369 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:13:23.369 04:16:35 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:23.369 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:23.369 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:13:23.369 00:13:23.369 --- 10.0.0.1 ping statistics --- 00:13:23.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:23.369 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:13:23.369 04:16:35 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:23.369 04:16:35 -- nvmf/common.sh@421 -- # return 0 00:13:23.369 04:16:35 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:23.369 04:16:35 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:23.369 04:16:35 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:23.369 04:16:35 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:23.369 04:16:35 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:23.369 04:16:35 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:23.369 04:16:35 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:23.369 04:16:35 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:13:23.369 04:16:35 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:23.369 04:16:35 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:23.369 04:16:35 -- common/autotest_common.sh@10 -- # set +x 00:13:23.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:23.628 04:16:35 -- nvmf/common.sh@469 -- # nvmfpid=78779 00:13:23.628 04:16:35 -- nvmf/common.sh@470 -- # waitforlisten 78779 00:13:23.628 04:16:35 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:23.628 04:16:35 -- common/autotest_common.sh@829 -- # '[' -z 78779 ']' 00:13:23.628 04:16:35 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:23.628 04:16:35 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:23.628 04:16:35 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:23.628 04:16:35 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:23.628 04:16:35 -- common/autotest_common.sh@10 -- # set +x 00:13:23.628 [2024-12-06 04:16:35.988867] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:23.628 [2024-12-06 04:16:35.988965] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:23.628 [2024-12-06 04:16:36.128653] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:23.886 [2024-12-06 04:16:36.193063] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:23.886 [2024-12-06 04:16:36.193519] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:23.886 [2024-12-06 04:16:36.193646] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:23.886 [2024-12-06 04:16:36.193815] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:23.886 [2024-12-06 04:16:36.194056] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:23.886 [2024-12-06 04:16:36.194131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:23.886 [2024-12-06 04:16:36.194190] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:23.886 [2024-12-06 04:16:36.194205] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:24.453 04:16:36 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:24.453 04:16:36 -- common/autotest_common.sh@862 -- # return 0 00:13:24.453 04:16:36 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:24.453 04:16:36 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:24.453 04:16:36 -- common/autotest_common.sh@10 -- # set +x 00:13:24.713 04:16:37 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:24.713 04:16:37 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:24.713 04:16:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.713 04:16:37 -- common/autotest_common.sh@10 -- # set +x 00:13:24.713 [2024-12-06 04:16:37.035852] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:24.713 04:16:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.713 04:16:37 -- target/multiconnection.sh@21 -- # seq 1 11 00:13:24.713 04:16:37 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:24.713 04:16:37 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:24.713 04:16:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.713 04:16:37 -- common/autotest_common.sh@10 -- # set +x 00:13:24.713 Malloc1 00:13:24.713 04:16:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.713 04:16:37 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:13:24.713 04:16:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.713 04:16:37 -- common/autotest_common.sh@10 -- # set +x 00:13:24.713 04:16:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.713 04:16:37 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:24.713 04:16:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.713 04:16:37 -- common/autotest_common.sh@10 -- # set +x 00:13:24.713 04:16:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.713 04:16:37 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:24.713 04:16:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.713 04:16:37 -- common/autotest_common.sh@10 -- # set +x 00:13:24.713 [2024-12-06 04:16:37.121324] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:24.713 04:16:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.713 04:16:37 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:24.713 04:16:37 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:13:24.713 04:16:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.713 04:16:37 -- common/autotest_common.sh@10 -- # set +x 00:13:24.713 Malloc2 00:13:24.713 04:16:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.713 04:16:37 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:13:24.713 04:16:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.713 04:16:37 -- common/autotest_common.sh@10 -- # set +x 00:13:24.713 04:16:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.713 04:16:37 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:13:24.713 04:16:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.713 04:16:37 -- common/autotest_common.sh@10 -- # set +x 00:13:24.713 04:16:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.713 04:16:37 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:13:24.713 04:16:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.713 04:16:37 -- common/autotest_common.sh@10 -- # set +x 00:13:24.713 04:16:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.713 04:16:37 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:24.713 04:16:37 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:13:24.713 04:16:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.713 04:16:37 -- common/autotest_common.sh@10 -- # set +x 00:13:24.713 Malloc3 00:13:24.713 04:16:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.713 04:16:37 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:13:24.713 04:16:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.713 04:16:37 -- common/autotest_common.sh@10 -- # set +x 00:13:24.713 04:16:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.713 04:16:37 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:13:24.713 04:16:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.713 04:16:37 -- common/autotest_common.sh@10 -- # set +x 00:13:24.713 04:16:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.713 04:16:37 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:13:24.713 04:16:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.713 04:16:37 -- common/autotest_common.sh@10 -- # set +x 00:13:24.713 04:16:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.713 04:16:37 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:24.713 04:16:37 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:13:24.713 04:16:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.713 04:16:37 -- common/autotest_common.sh@10 -- # set +x 00:13:24.713 Malloc4 00:13:24.713 04:16:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.713 04:16:37 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:13:24.713 04:16:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.713 04:16:37 -- common/autotest_common.sh@10 -- # set +x 00:13:24.713 04:16:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.713 04:16:37 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:13:24.713 04:16:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.713 04:16:37 -- common/autotest_common.sh@10 -- # set +x 00:13:24.713 04:16:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.713 
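The same create/attach/listen sequence repeats for all eleven subsystems. Below is a condensed sketch of that loop using direct rpc.py calls; the autotest's rpc_cmd wrapper resolves to the same RPC methods, and the rpc.py path is an assumption.

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

"$RPC" nvmf_create_transport -t tcp -o -u 8192                  # same transport options as in the trace above
for i in $(seq 1 11); do
    "$RPC" bdev_malloc_create 64 512 -b "Malloc$i"              # 64 MB RAM-backed bdev, 512-byte blocks
    "$RPC" nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
    "$RPC" nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
    "$RPC" nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
done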
04:16:37 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:13:24.713 04:16:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.713 04:16:37 -- common/autotest_common.sh@10 -- # set +x 00:13:24.713 04:16:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.713 04:16:37 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:24.713 04:16:37 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:13:24.713 04:16:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.713 04:16:37 -- common/autotest_common.sh@10 -- # set +x 00:13:24.972 Malloc5 00:13:24.972 04:16:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.972 04:16:37 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:13:24.972 04:16:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.972 04:16:37 -- common/autotest_common.sh@10 -- # set +x 00:13:24.972 04:16:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.972 04:16:37 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:13:24.972 04:16:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.972 04:16:37 -- common/autotest_common.sh@10 -- # set +x 00:13:24.972 04:16:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.972 04:16:37 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:13:24.972 04:16:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.972 04:16:37 -- common/autotest_common.sh@10 -- # set +x 00:13:24.972 04:16:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.972 04:16:37 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:24.972 04:16:37 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:13:24.972 04:16:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.972 04:16:37 -- common/autotest_common.sh@10 -- # set +x 00:13:24.972 Malloc6 00:13:24.972 04:16:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.972 04:16:37 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:13:24.972 04:16:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.972 04:16:37 -- common/autotest_common.sh@10 -- # set +x 00:13:24.972 04:16:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.973 04:16:37 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:13:24.973 04:16:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.973 04:16:37 -- common/autotest_common.sh@10 -- # set +x 00:13:24.973 04:16:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.973 04:16:37 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:13:24.973 04:16:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.973 04:16:37 -- common/autotest_common.sh@10 -- # set +x 00:13:24.973 04:16:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.973 04:16:37 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:24.973 04:16:37 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:13:24.973 04:16:37 -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:13:24.973 04:16:37 -- common/autotest_common.sh@10 -- # set +x 00:13:24.973 Malloc7 00:13:24.973 04:16:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.973 04:16:37 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:13:24.973 04:16:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.973 04:16:37 -- common/autotest_common.sh@10 -- # set +x 00:13:24.973 04:16:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.973 04:16:37 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:13:24.973 04:16:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.973 04:16:37 -- common/autotest_common.sh@10 -- # set +x 00:13:24.973 04:16:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.973 04:16:37 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:13:24.973 04:16:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.973 04:16:37 -- common/autotest_common.sh@10 -- # set +x 00:13:24.973 04:16:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.973 04:16:37 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:24.973 04:16:37 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:13:24.973 04:16:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.973 04:16:37 -- common/autotest_common.sh@10 -- # set +x 00:13:24.973 Malloc8 00:13:24.973 04:16:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.973 04:16:37 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:13:24.973 04:16:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.973 04:16:37 -- common/autotest_common.sh@10 -- # set +x 00:13:24.973 04:16:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.973 04:16:37 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:13:24.973 04:16:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.973 04:16:37 -- common/autotest_common.sh@10 -- # set +x 00:13:24.973 04:16:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.973 04:16:37 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:13:24.973 04:16:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.973 04:16:37 -- common/autotest_common.sh@10 -- # set +x 00:13:24.973 04:16:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.973 04:16:37 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:24.973 04:16:37 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:13:24.973 04:16:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.973 04:16:37 -- common/autotest_common.sh@10 -- # set +x 00:13:24.973 Malloc9 00:13:24.973 04:16:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.973 04:16:37 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:13:24.973 04:16:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.973 04:16:37 -- common/autotest_common.sh@10 -- # set +x 00:13:24.973 04:16:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.973 04:16:37 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode9 Malloc9 00:13:24.973 04:16:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.973 04:16:37 -- common/autotest_common.sh@10 -- # set +x 00:13:24.973 04:16:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.973 04:16:37 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:13:24.973 04:16:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.973 04:16:37 -- common/autotest_common.sh@10 -- # set +x 00:13:24.973 04:16:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.973 04:16:37 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:24.973 04:16:37 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:13:24.973 04:16:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.973 04:16:37 -- common/autotest_common.sh@10 -- # set +x 00:13:25.232 Malloc10 00:13:25.232 04:16:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.232 04:16:37 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:13:25.232 04:16:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.232 04:16:37 -- common/autotest_common.sh@10 -- # set +x 00:13:25.232 04:16:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.232 04:16:37 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:13:25.232 04:16:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.232 04:16:37 -- common/autotest_common.sh@10 -- # set +x 00:13:25.232 04:16:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.232 04:16:37 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:13:25.232 04:16:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.232 04:16:37 -- common/autotest_common.sh@10 -- # set +x 00:13:25.232 04:16:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.232 04:16:37 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:25.232 04:16:37 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:13:25.232 04:16:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.232 04:16:37 -- common/autotest_common.sh@10 -- # set +x 00:13:25.232 Malloc11 00:13:25.232 04:16:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.232 04:16:37 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:13:25.232 04:16:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.232 04:16:37 -- common/autotest_common.sh@10 -- # set +x 00:13:25.232 04:16:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.232 04:16:37 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:13:25.232 04:16:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.232 04:16:37 -- common/autotest_common.sh@10 -- # set +x 00:13:25.232 04:16:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.232 04:16:37 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:13:25.232 04:16:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.232 04:16:37 -- common/autotest_common.sh@10 -- # set +x 00:13:25.232 04:16:37 -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:13:25.232 04:16:37 -- target/multiconnection.sh@28 -- # seq 1 11 00:13:25.232 04:16:37 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:25.232 04:16:37 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cb4d3929-adbe-4142-b5d1-990bbf2d4fca --hostid=cb4d3929-adbe-4142-b5d1-990bbf2d4fca -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:25.232 04:16:37 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:13:25.232 04:16:37 -- common/autotest_common.sh@1187 -- # local i=0 00:13:25.232 04:16:37 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:13:25.232 04:16:37 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:13:25.232 04:16:37 -- common/autotest_common.sh@1194 -- # sleep 2 00:13:27.764 04:16:39 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:13:27.764 04:16:39 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:13:27.764 04:16:39 -- common/autotest_common.sh@1196 -- # grep -c SPDK1 00:13:27.764 04:16:39 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:13:27.764 04:16:39 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:13:27.764 04:16:39 -- common/autotest_common.sh@1197 -- # return 0 00:13:27.764 04:16:39 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:27.764 04:16:39 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cb4d3929-adbe-4142-b5d1-990bbf2d4fca --hostid=cb4d3929-adbe-4142-b5d1-990bbf2d4fca -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:13:27.764 04:16:39 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:13:27.764 04:16:39 -- common/autotest_common.sh@1187 -- # local i=0 00:13:27.764 04:16:39 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:13:27.764 04:16:39 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:13:27.764 04:16:39 -- common/autotest_common.sh@1194 -- # sleep 2 00:13:29.669 04:16:41 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:13:29.669 04:16:41 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:13:29.669 04:16:41 -- common/autotest_common.sh@1196 -- # grep -c SPDK2 00:13:29.669 04:16:41 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:13:29.669 04:16:41 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:13:29.669 04:16:41 -- common/autotest_common.sh@1197 -- # return 0 00:13:29.669 04:16:41 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:29.669 04:16:41 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cb4d3929-adbe-4142-b5d1-990bbf2d4fca --hostid=cb4d3929-adbe-4142-b5d1-990bbf2d4fca -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:13:29.669 04:16:42 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:13:29.669 04:16:42 -- common/autotest_common.sh@1187 -- # local i=0 00:13:29.669 04:16:42 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:13:29.669 04:16:42 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:13:29.669 04:16:42 -- common/autotest_common.sh@1194 -- # sleep 2 00:13:31.570 04:16:44 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:13:31.570 04:16:44 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:13:31.570 04:16:44 -- common/autotest_common.sh@1196 -- # grep -c SPDK3 00:13:31.570 04:16:44 -- 
common/autotest_common.sh@1196 -- # nvme_devices=1 00:13:31.570 04:16:44 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:13:31.570 04:16:44 -- common/autotest_common.sh@1197 -- # return 0 00:13:31.570 04:16:44 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:31.570 04:16:44 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cb4d3929-adbe-4142-b5d1-990bbf2d4fca --hostid=cb4d3929-adbe-4142-b5d1-990bbf2d4fca -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:13:31.828 04:16:44 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:13:31.828 04:16:44 -- common/autotest_common.sh@1187 -- # local i=0 00:13:31.828 04:16:44 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:13:31.828 04:16:44 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:13:31.828 04:16:44 -- common/autotest_common.sh@1194 -- # sleep 2 00:13:33.724 04:16:46 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:13:33.724 04:16:46 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:13:33.724 04:16:46 -- common/autotest_common.sh@1196 -- # grep -c SPDK4 00:13:33.724 04:16:46 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:13:33.724 04:16:46 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:13:33.724 04:16:46 -- common/autotest_common.sh@1197 -- # return 0 00:13:33.724 04:16:46 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:33.724 04:16:46 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cb4d3929-adbe-4142-b5d1-990bbf2d4fca --hostid=cb4d3929-adbe-4142-b5d1-990bbf2d4fca -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:13:33.982 04:16:46 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:13:33.982 04:16:46 -- common/autotest_common.sh@1187 -- # local i=0 00:13:33.982 04:16:46 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:13:33.982 04:16:46 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:13:33.982 04:16:46 -- common/autotest_common.sh@1194 -- # sleep 2 00:13:35.886 04:16:48 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:13:35.886 04:16:48 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:13:35.886 04:16:48 -- common/autotest_common.sh@1196 -- # grep -c SPDK5 00:13:35.886 04:16:48 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:13:35.886 04:16:48 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:13:35.886 04:16:48 -- common/autotest_common.sh@1197 -- # return 0 00:13:35.886 04:16:48 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:35.886 04:16:48 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cb4d3929-adbe-4142-b5d1-990bbf2d4fca --hostid=cb4d3929-adbe-4142-b5d1-990bbf2d4fca -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:13:36.145 04:16:48 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:13:36.145 04:16:48 -- common/autotest_common.sh@1187 -- # local i=0 00:13:36.145 04:16:48 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:13:36.145 04:16:48 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:13:36.145 04:16:48 -- common/autotest_common.sh@1194 -- # sleep 2 00:13:38.048 04:16:50 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:13:38.048 04:16:50 -- common/autotest_common.sh@1196 -- # 
lsblk -l -o NAME,SERIAL 00:13:38.048 04:16:50 -- common/autotest_common.sh@1196 -- # grep -c SPDK6 00:13:38.048 04:16:50 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:13:38.048 04:16:50 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:13:38.048 04:16:50 -- common/autotest_common.sh@1197 -- # return 0 00:13:38.048 04:16:50 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:38.048 04:16:50 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cb4d3929-adbe-4142-b5d1-990bbf2d4fca --hostid=cb4d3929-adbe-4142-b5d1-990bbf2d4fca -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:13:38.308 04:16:50 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:13:38.308 04:16:50 -- common/autotest_common.sh@1187 -- # local i=0 00:13:38.308 04:16:50 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:13:38.308 04:16:50 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:13:38.308 04:16:50 -- common/autotest_common.sh@1194 -- # sleep 2 00:13:40.214 04:16:52 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:13:40.214 04:16:52 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:13:40.214 04:16:52 -- common/autotest_common.sh@1196 -- # grep -c SPDK7 00:13:40.214 04:16:52 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:13:40.214 04:16:52 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:13:40.214 04:16:52 -- common/autotest_common.sh@1197 -- # return 0 00:13:40.214 04:16:52 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:40.214 04:16:52 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cb4d3929-adbe-4142-b5d1-990bbf2d4fca --hostid=cb4d3929-adbe-4142-b5d1-990bbf2d4fca -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:13:40.473 04:16:52 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:13:40.473 04:16:52 -- common/autotest_common.sh@1187 -- # local i=0 00:13:40.473 04:16:52 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:13:40.473 04:16:52 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:13:40.473 04:16:52 -- common/autotest_common.sh@1194 -- # sleep 2 00:13:42.375 04:16:54 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:13:42.375 04:16:54 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:13:42.375 04:16:54 -- common/autotest_common.sh@1196 -- # grep -c SPDK8 00:13:42.637 04:16:54 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:13:42.637 04:16:54 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:13:42.637 04:16:54 -- common/autotest_common.sh@1197 -- # return 0 00:13:42.637 04:16:54 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:42.637 04:16:54 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cb4d3929-adbe-4142-b5d1-990bbf2d4fca --hostid=cb4d3929-adbe-4142-b5d1-990bbf2d4fca -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:13:42.637 04:16:55 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:13:42.637 04:16:55 -- common/autotest_common.sh@1187 -- # local i=0 00:13:42.637 04:16:55 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:13:42.637 04:16:55 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:13:42.637 04:16:55 -- common/autotest_common.sh@1194 -- # sleep 2 00:13:44.622 
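Each of those connections follows the same host-side pattern: issue nvme connect against one subsystem, then poll until a block device with the matching serial appears, mirroring the waitforserial helper. A minimal sketch of that pattern; the function name is illustrative and the host UUID is the one generated for this run.

HOSTID=cb4d3929-adbe-4142-b5d1-990bbf2d4fca                    # host UUID used throughout this run

connect_and_wait() {
    local n=$1 serial="SPDK$1" i=0
    nvme connect --hostnqn="nqn.2014-08.org.nvmexpress:uuid:$HOSTID" --hostid="$HOSTID" \
        -t tcp -n "nqn.2016-06.io.spdk:cnode$n" -a 10.0.0.2 -s 4420
    while (( i++ <= 15 )); do                                   # allow roughly 30s for the namespace to show up
        sleep 2
        (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") >= 1 )) && return 0
    done
    return 1
}

connect_and_wait 7                                              # e.g. the cnode7 connection traced above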
04:16:57 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:13:44.622 04:16:57 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:13:44.622 04:16:57 -- common/autotest_common.sh@1196 -- # grep -c SPDK9 00:13:44.622 04:16:57 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:13:44.622 04:16:57 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:13:44.622 04:16:57 -- common/autotest_common.sh@1197 -- # return 0 00:13:44.622 04:16:57 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:44.622 04:16:57 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cb4d3929-adbe-4142-b5d1-990bbf2d4fca --hostid=cb4d3929-adbe-4142-b5d1-990bbf2d4fca -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:13:44.881 04:16:57 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:13:44.881 04:16:57 -- common/autotest_common.sh@1187 -- # local i=0 00:13:44.881 04:16:57 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:13:44.881 04:16:57 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:13:44.881 04:16:57 -- common/autotest_common.sh@1194 -- # sleep 2 00:13:46.783 04:16:59 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:13:46.783 04:16:59 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:13:46.783 04:16:59 -- common/autotest_common.sh@1196 -- # grep -c SPDK10 00:13:46.783 04:16:59 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:13:46.783 04:16:59 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:13:46.783 04:16:59 -- common/autotest_common.sh@1197 -- # return 0 00:13:46.783 04:16:59 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:46.783 04:16:59 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cb4d3929-adbe-4142-b5d1-990bbf2d4fca --hostid=cb4d3929-adbe-4142-b5d1-990bbf2d4fca -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:13:47.042 04:16:59 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:13:47.042 04:16:59 -- common/autotest_common.sh@1187 -- # local i=0 00:13:47.042 04:16:59 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:13:47.042 04:16:59 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:13:47.042 04:16:59 -- common/autotest_common.sh@1194 -- # sleep 2 00:13:48.940 04:17:01 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:13:48.940 04:17:01 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:13:48.940 04:17:01 -- common/autotest_common.sh@1196 -- # grep -c SPDK11 00:13:48.940 04:17:01 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:13:48.940 04:17:01 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:13:48.940 04:17:01 -- common/autotest_common.sh@1197 -- # return 0 00:13:48.940 04:17:01 -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:13:48.940 [global] 00:13:48.940 thread=1 00:13:48.940 invalidate=1 00:13:48.940 rw=read 00:13:48.940 time_based=1 00:13:48.940 runtime=10 00:13:48.940 ioengine=libaio 00:13:48.940 direct=1 00:13:48.940 bs=262144 00:13:48.940 iodepth=64 00:13:48.940 norandommap=1 00:13:48.940 numjobs=1 00:13:48.940 00:13:48.940 [job0] 00:13:48.940 filename=/dev/nvme0n1 00:13:48.940 [job1] 00:13:48.940 filename=/dev/nvme10n1 00:13:48.940 [job2] 00:13:48.940 filename=/dev/nvme1n1 
00:13:48.940 [job3] 00:13:48.940 filename=/dev/nvme2n1 00:13:48.940 [job4] 00:13:48.940 filename=/dev/nvme3n1 00:13:49.198 [job5] 00:13:49.198 filename=/dev/nvme4n1 00:13:49.198 [job6] 00:13:49.198 filename=/dev/nvme5n1 00:13:49.198 [job7] 00:13:49.198 filename=/dev/nvme6n1 00:13:49.198 [job8] 00:13:49.198 filename=/dev/nvme7n1 00:13:49.198 [job9] 00:13:49.198 filename=/dev/nvme8n1 00:13:49.198 [job10] 00:13:49.198 filename=/dev/nvme9n1 00:13:49.198 Could not set queue depth (nvme0n1) 00:13:49.198 Could not set queue depth (nvme10n1) 00:13:49.198 Could not set queue depth (nvme1n1) 00:13:49.198 Could not set queue depth (nvme2n1) 00:13:49.198 Could not set queue depth (nvme3n1) 00:13:49.198 Could not set queue depth (nvme4n1) 00:13:49.198 Could not set queue depth (nvme5n1) 00:13:49.198 Could not set queue depth (nvme6n1) 00:13:49.198 Could not set queue depth (nvme7n1) 00:13:49.198 Could not set queue depth (nvme8n1) 00:13:49.198 Could not set queue depth (nvme9n1) 00:13:49.457 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:49.457 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:49.457 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:49.457 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:49.457 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:49.457 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:49.457 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:49.457 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:49.457 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:49.457 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:49.457 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:49.457 fio-3.35 00:13:49.457 Starting 11 threads 00:14:01.773 00:14:01.773 job0: (groupid=0, jobs=1): err= 0: pid=79238: Fri Dec 6 04:17:12 2024 00:14:01.773 read: IOPS=418, BW=105MiB/s (110MB/s)(1058MiB/10102msec) 00:14:01.773 slat (usec): min=16, max=78371, avg=2358.77, stdev=5675.37 00:14:01.773 clat (msec): min=54, max=277, avg=150.22, stdev=25.09 00:14:01.773 lat (msec): min=54, max=277, avg=152.57, stdev=25.71 00:14:01.773 clat percentiles (msec): 00:14:01.773 | 1.00th=[ 99], 5.00th=[ 129], 10.00th=[ 132], 20.00th=[ 136], 00:14:01.773 | 30.00th=[ 138], 40.00th=[ 142], 50.00th=[ 144], 60.00th=[ 146], 00:14:01.773 | 70.00th=[ 150], 80.00th=[ 159], 90.00th=[ 199], 95.00th=[ 211], 00:14:01.773 | 99.00th=[ 222], 99.50th=[ 230], 99.90th=[ 245], 99.95th=[ 249], 00:14:01.773 | 99.99th=[ 279] 00:14:01.773 bw ( KiB/s): min=74240, max=120320, per=5.63%, avg=106726.40, stdev=13745.91, samples=20 00:14:01.773 iops : min= 290, max= 470, avg=416.90, stdev=53.69, samples=20 00:14:01.773 lat (msec) : 100=1.16%, 250=98.82%, 500=0.02% 00:14:01.773 cpu : usr=0.18%, sys=1.47%, ctx=1014, majf=0, minf=4097 00:14:01.773 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:14:01.773 submit 
: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:01.773 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:14:01.773 issued rwts: total=4232,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:01.773 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:01.773 job1: (groupid=0, jobs=1): err= 0: pid=79239: Fri Dec 6 04:17:12 2024 00:14:01.773 read: IOPS=746, BW=187MiB/s (196MB/s)(1869MiB/10019msec) 00:14:01.773 slat (usec): min=15, max=54050, avg=1332.68, stdev=3009.53 00:14:01.773 clat (msec): min=17, max=163, avg=84.30, stdev=21.44 00:14:01.773 lat (msec): min=26, max=163, avg=85.63, stdev=21.75 00:14:01.773 clat percentiles (msec): 00:14:01.773 | 1.00th=[ 48], 5.00th=[ 54], 10.00th=[ 57], 20.00th=[ 59], 00:14:01.773 | 30.00th=[ 80], 40.00th=[ 85], 50.00th=[ 88], 60.00th=[ 90], 00:14:01.773 | 70.00th=[ 93], 80.00th=[ 99], 90.00th=[ 109], 95.00th=[ 121], 00:14:01.773 | 99.00th=[ 146], 99.50th=[ 148], 99.90th=[ 155], 99.95th=[ 155], 00:14:01.773 | 99.99th=[ 163] 00:14:01.774 bw ( KiB/s): min=118784, max=286720, per=10.01%, avg=189745.10, stdev=48075.89, samples=20 00:14:01.774 iops : min= 464, max= 1120, avg=741.15, stdev=187.81, samples=20 00:14:01.774 lat (msec) : 20=0.01%, 50=1.35%, 100=81.05%, 250=17.59% 00:14:01.774 cpu : usr=0.38%, sys=2.89%, ctx=1688, majf=0, minf=4097 00:14:01.774 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:14:01.774 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:01.774 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:14:01.774 issued rwts: total=7477,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:01.774 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:01.774 job2: (groupid=0, jobs=1): err= 0: pid=79240: Fri Dec 6 04:17:12 2024 00:14:01.774 read: IOPS=415, BW=104MiB/s (109MB/s)(1049MiB/10102msec) 00:14:01.774 slat (usec): min=20, max=94342, avg=2380.71, stdev=6658.31 00:14:01.774 clat (msec): min=97, max=260, avg=151.56, stdev=23.74 00:14:01.774 lat (msec): min=110, max=262, avg=153.94, stdev=24.57 00:14:01.774 clat percentiles (msec): 00:14:01.774 | 1.00th=[ 124], 5.00th=[ 130], 10.00th=[ 133], 20.00th=[ 138], 00:14:01.774 | 30.00th=[ 140], 40.00th=[ 142], 50.00th=[ 144], 60.00th=[ 146], 00:14:01.774 | 70.00th=[ 150], 80.00th=[ 161], 90.00th=[ 201], 95.00th=[ 211], 00:14:01.774 | 99.00th=[ 222], 99.50th=[ 228], 99.90th=[ 239], 99.95th=[ 241], 00:14:01.774 | 99.99th=[ 262] 00:14:01.774 bw ( KiB/s): min=71168, max=122880, per=5.58%, avg=105729.65, stdev=15080.91, samples=20 00:14:01.774 iops : min= 278, max= 480, avg=412.95, stdev=58.92, samples=20 00:14:01.774 lat (msec) : 100=0.02%, 250=99.93%, 500=0.05% 00:14:01.774 cpu : usr=0.22%, sys=1.54%, ctx=1055, majf=0, minf=4097 00:14:01.774 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:14:01.774 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:01.774 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:14:01.774 issued rwts: total=4194,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:01.774 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:01.774 job3: (groupid=0, jobs=1): err= 0: pid=79241: Fri Dec 6 04:17:12 2024 00:14:01.774 read: IOPS=986, BW=247MiB/s (259MB/s)(2489MiB/10089msec) 00:14:01.774 slat (usec): min=16, max=58498, avg=992.36, stdev=2516.71 00:14:01.774 clat (usec): min=590, max=200241, avg=63781.01, stdev=24766.27 00:14:01.774 lat (usec): min=644, max=200284, 
avg=64773.36, stdev=25089.50 00:14:01.774 clat percentiles (msec): 00:14:01.774 | 1.00th=[ 30], 5.00th=[ 33], 10.00th=[ 36], 20.00th=[ 40], 00:14:01.774 | 30.00th=[ 53], 40.00th=[ 59], 50.00th=[ 62], 60.00th=[ 65], 00:14:01.774 | 70.00th=[ 69], 80.00th=[ 77], 90.00th=[ 104], 95.00th=[ 115], 00:14:01.774 | 99.00th=[ 131], 99.50th=[ 142], 99.90th=[ 180], 99.95th=[ 199], 00:14:01.774 | 99.99th=[ 201] 00:14:01.774 bw ( KiB/s): min=139264, max=431616, per=13.36%, avg=253299.90, stdev=83682.14, samples=20 00:14:01.774 iops : min= 544, max= 1686, avg=989.45, stdev=326.88, samples=20 00:14:01.774 lat (usec) : 750=0.01%, 1000=0.02% 00:14:01.774 lat (msec) : 2=0.11%, 4=0.02%, 10=0.31%, 20=0.13%, 50=27.77% 00:14:01.774 lat (msec) : 100=60.60%, 250=11.03% 00:14:01.774 cpu : usr=0.40%, sys=3.21%, ctx=2101, majf=0, minf=4097 00:14:01.774 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:14:01.774 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:01.774 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:14:01.774 issued rwts: total=9956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:01.774 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:01.774 job4: (groupid=0, jobs=1): err= 0: pid=79242: Fri Dec 6 04:17:12 2024 00:14:01.774 read: IOPS=641, BW=160MiB/s (168MB/s)(1618MiB/10088msec) 00:14:01.774 slat (usec): min=20, max=71531, avg=1529.06, stdev=3494.02 00:14:01.774 clat (msec): min=51, max=192, avg=98.06, stdev=15.96 00:14:01.774 lat (msec): min=53, max=192, avg=99.59, stdev=16.17 00:14:01.774 clat percentiles (msec): 00:14:01.774 | 1.00th=[ 74], 5.00th=[ 82], 10.00th=[ 84], 20.00th=[ 86], 00:14:01.774 | 30.00th=[ 88], 40.00th=[ 91], 50.00th=[ 93], 60.00th=[ 99], 00:14:01.774 | 70.00th=[ 104], 80.00th=[ 110], 90.00th=[ 120], 95.00th=[ 130], 00:14:01.774 | 99.00th=[ 153], 99.50th=[ 159], 99.90th=[ 188], 99.95th=[ 190], 00:14:01.774 | 99.99th=[ 192] 00:14:01.774 bw ( KiB/s): min=119808, max=186368, per=8.65%, avg=164054.80, stdev=20489.47, samples=20 00:14:01.774 iops : min= 468, max= 728, avg=640.80, stdev=80.05, samples=20 00:14:01.774 lat (msec) : 100=64.08%, 250=35.92% 00:14:01.774 cpu : usr=0.41%, sys=2.62%, ctx=1475, majf=0, minf=4097 00:14:01.774 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:14:01.774 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:01.774 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:14:01.774 issued rwts: total=6472,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:01.774 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:01.774 job5: (groupid=0, jobs=1): err= 0: pid=79243: Fri Dec 6 04:17:12 2024 00:14:01.774 read: IOPS=1352, BW=338MiB/s (355MB/s)(3410MiB/10084msec) 00:14:01.774 slat (usec): min=14, max=46606, avg=723.20, stdev=1907.04 00:14:01.774 clat (msec): min=2, max=192, avg=46.52, stdev=26.27 00:14:01.774 lat (msec): min=2, max=195, avg=47.24, stdev=26.64 00:14:01.774 clat percentiles (msec): 00:14:01.774 | 1.00th=[ 28], 5.00th=[ 30], 10.00th=[ 31], 20.00th=[ 32], 00:14:01.774 | 30.00th=[ 32], 40.00th=[ 33], 50.00th=[ 34], 60.00th=[ 35], 00:14:01.774 | 70.00th=[ 37], 80.00th=[ 69], 90.00th=[ 95], 95.00th=[ 109], 00:14:01.774 | 99.00th=[ 122], 99.50th=[ 125], 99.90th=[ 167], 99.95th=[ 190], 00:14:01.774 | 99.99th=[ 192] 00:14:01.774 bw ( KiB/s): min=139776, max=512000, per=18.33%, avg=347512.75, stdev=155868.96, samples=20 00:14:01.774 iops : min= 546, max= 2000, avg=1357.40, stdev=608.93, 
samples=20 00:14:01.774 lat (msec) : 4=0.02%, 10=0.10%, 20=0.25%, 50=74.37%, 100=16.80% 00:14:01.774 lat (msec) : 250=8.45% 00:14:01.774 cpu : usr=0.56%, sys=3.86%, ctx=2972, majf=0, minf=4097 00:14:01.774 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:14:01.774 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:01.774 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:14:01.774 issued rwts: total=13639,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:01.774 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:01.774 job6: (groupid=0, jobs=1): err= 0: pid=79246: Fri Dec 6 04:17:12 2024 00:14:01.774 read: IOPS=416, BW=104MiB/s (109MB/s)(1052MiB/10110msec) 00:14:01.774 slat (usec): min=15, max=92661, avg=2375.11, stdev=6056.85 00:14:01.774 clat (msec): min=49, max=266, avg=151.24, stdev=24.47 00:14:01.774 lat (msec): min=50, max=283, avg=153.61, stdev=25.16 00:14:01.774 clat percentiles (msec): 00:14:01.774 | 1.00th=[ 124], 5.00th=[ 130], 10.00th=[ 134], 20.00th=[ 138], 00:14:01.774 | 30.00th=[ 140], 40.00th=[ 142], 50.00th=[ 144], 60.00th=[ 146], 00:14:01.774 | 70.00th=[ 150], 80.00th=[ 159], 90.00th=[ 201], 95.00th=[ 211], 00:14:01.774 | 99.00th=[ 222], 99.50th=[ 226], 99.90th=[ 241], 99.95th=[ 262], 00:14:01.774 | 99.99th=[ 268] 00:14:01.774 bw ( KiB/s): min=73216, max=122880, per=5.59%, avg=106060.80, stdev=14618.36, samples=20 00:14:01.774 iops : min= 286, max= 480, avg=414.30, stdev=57.10, samples=20 00:14:01.774 lat (msec) : 50=0.02%, 100=0.40%, 250=99.48%, 500=0.10% 00:14:01.774 cpu : usr=0.15%, sys=1.79%, ctx=1008, majf=0, minf=4097 00:14:01.774 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:14:01.774 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:01.775 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:14:01.775 issued rwts: total=4206,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:01.775 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:01.775 job7: (groupid=0, jobs=1): err= 0: pid=79250: Fri Dec 6 04:17:12 2024 00:14:01.775 read: IOPS=874, BW=219MiB/s (229MB/s)(2205MiB/10086msec) 00:14:01.775 slat (usec): min=15, max=31201, avg=1127.35, stdev=2668.42 00:14:01.775 clat (msec): min=3, max=201, avg=71.96, stdev=19.64 00:14:01.775 lat (msec): min=3, max=201, avg=73.09, stdev=19.88 00:14:01.775 clat percentiles (msec): 00:14:01.775 | 1.00th=[ 26], 5.00th=[ 53], 10.00th=[ 57], 20.00th=[ 60], 00:14:01.775 | 30.00th=[ 62], 40.00th=[ 65], 50.00th=[ 67], 60.00th=[ 70], 00:14:01.775 | 70.00th=[ 74], 80.00th=[ 83], 90.00th=[ 105], 95.00th=[ 114], 00:14:01.775 | 99.00th=[ 125], 99.50th=[ 129], 99.90th=[ 190], 99.95th=[ 190], 00:14:01.775 | 99.99th=[ 201] 00:14:01.775 bw ( KiB/s): min=140288, max=294989, per=11.83%, avg=224208.65, stdev=46172.63, samples=20 00:14:01.775 iops : min= 548, max= 1152, avg=875.80, stdev=180.34, samples=20 00:14:01.775 lat (msec) : 4=0.01%, 20=0.48%, 50=3.24%, 100=84.21%, 250=12.06% 00:14:01.775 cpu : usr=0.28%, sys=2.92%, ctx=1837, majf=0, minf=4098 00:14:01.775 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:14:01.775 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:01.775 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:14:01.775 issued rwts: total=8820,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:01.775 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:01.775 job8: (groupid=0, jobs=1): 
err= 0: pid=79252: Fri Dec 6 04:17:12 2024 00:14:01.775 read: IOPS=747, BW=187MiB/s (196MB/s)(1872MiB/10019msec) 00:14:01.775 slat (usec): min=20, max=33322, avg=1332.24, stdev=3010.03 00:14:01.775 clat (msec): min=11, max=160, avg=84.20, stdev=21.21 00:14:01.775 lat (msec): min=21, max=160, avg=85.53, stdev=21.53 00:14:01.775 clat percentiles (msec): 00:14:01.775 | 1.00th=[ 50], 5.00th=[ 55], 10.00th=[ 57], 20.00th=[ 59], 00:14:01.775 | 30.00th=[ 79], 40.00th=[ 85], 50.00th=[ 88], 60.00th=[ 90], 00:14:01.775 | 70.00th=[ 93], 80.00th=[ 99], 90.00th=[ 108], 95.00th=[ 120], 00:14:01.775 | 99.00th=[ 146], 99.50th=[ 150], 99.90th=[ 157], 99.95th=[ 159], 00:14:01.775 | 99.99th=[ 161] 00:14:01.775 bw ( KiB/s): min=114176, max=284672, per=10.02%, avg=190002.95, stdev=47318.68, samples=20 00:14:01.775 iops : min= 446, max= 1112, avg=742.15, stdev=184.76, samples=20 00:14:01.775 lat (msec) : 20=0.01%, 50=1.04%, 100=81.58%, 250=17.37% 00:14:01.775 cpu : usr=0.36%, sys=2.84%, ctx=1648, majf=0, minf=4097 00:14:01.775 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:14:01.775 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:01.775 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:14:01.775 issued rwts: total=7486,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:01.775 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:01.775 job9: (groupid=0, jobs=1): err= 0: pid=79253: Fri Dec 6 04:17:12 2024 00:14:01.775 read: IOPS=414, BW=104MiB/s (109MB/s)(1048MiB/10104msec) 00:14:01.775 slat (usec): min=15, max=108508, avg=2383.58, stdev=6048.71 00:14:01.775 clat (msec): min=55, max=275, avg=151.69, stdev=25.32 00:14:01.775 lat (msec): min=55, max=275, avg=154.07, stdev=25.96 00:14:01.775 clat percentiles (msec): 00:14:01.775 | 1.00th=[ 121], 5.00th=[ 131], 10.00th=[ 134], 20.00th=[ 138], 00:14:01.775 | 30.00th=[ 140], 40.00th=[ 142], 50.00th=[ 144], 60.00th=[ 146], 00:14:01.775 | 70.00th=[ 150], 80.00th=[ 161], 90.00th=[ 201], 95.00th=[ 211], 00:14:01.775 | 99.00th=[ 224], 99.50th=[ 239], 99.90th=[ 251], 99.95th=[ 266], 00:14:01.775 | 99.99th=[ 275] 00:14:01.775 bw ( KiB/s): min=73728, max=120832, per=5.57%, avg=105651.20, stdev=13966.76, samples=20 00:14:01.775 iops : min= 288, max= 472, avg=412.70, stdev=54.56, samples=20 00:14:01.775 lat (msec) : 100=0.79%, 250=99.07%, 500=0.14% 00:14:01.775 cpu : usr=0.16%, sys=1.39%, ctx=1012, majf=0, minf=4097 00:14:01.775 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:14:01.775 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:01.775 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:14:01.775 issued rwts: total=4190,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:01.775 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:01.775 job10: (groupid=0, jobs=1): err= 0: pid=79254: Fri Dec 6 04:17:12 2024 00:14:01.775 read: IOPS=414, BW=104MiB/s (109MB/s)(1048MiB/10104msec) 00:14:01.775 slat (usec): min=20, max=99657, avg=2381.21, stdev=6136.30 00:14:01.775 clat (msec): min=65, max=272, avg=151.61, stdev=26.14 00:14:01.775 lat (msec): min=65, max=273, avg=153.99, stdev=26.82 00:14:01.775 clat percentiles (msec): 00:14:01.775 | 1.00th=[ 75], 5.00th=[ 130], 10.00th=[ 134], 20.00th=[ 138], 00:14:01.775 | 30.00th=[ 140], 40.00th=[ 142], 50.00th=[ 144], 60.00th=[ 146], 00:14:01.775 | 70.00th=[ 150], 80.00th=[ 161], 90.00th=[ 203], 95.00th=[ 211], 00:14:01.775 | 99.00th=[ 228], 99.50th=[ 236], 99.90th=[ 251], 
99.95th=[ 268], 00:14:01.775 | 99.99th=[ 275] 00:14:01.775 bw ( KiB/s): min=71680, max=119808, per=5.58%, avg=105702.40, stdev=13685.55, samples=20 00:14:01.775 iops : min= 280, max= 468, avg=412.90, stdev=53.46, samples=20 00:14:01.775 lat (msec) : 100=1.26%, 250=98.62%, 500=0.12% 00:14:01.775 cpu : usr=0.23%, sys=1.47%, ctx=1017, majf=0, minf=4097 00:14:01.775 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:14:01.775 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:01.775 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:14:01.775 issued rwts: total=4192,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:01.775 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:01.775 00:14:01.775 Run status group 0 (all jobs): 00:14:01.775 READ: bw=1851MiB/s (1941MB/s), 104MiB/s-338MiB/s (109MB/s-355MB/s), io=18.3GiB (19.6GB), run=10019-10110msec 00:14:01.775 00:14:01.775 Disk stats (read/write): 00:14:01.775 nvme0n1: ios=8352/0, merge=0/0, ticks=1227137/0, in_queue=1227137, util=97.87% 00:14:01.775 nvme10n1: ios=14884/0, merge=0/0, ticks=1237240/0, in_queue=1237240, util=97.93% 00:14:01.775 nvme1n1: ios=8279/0, merge=0/0, ticks=1228653/0, in_queue=1228653, util=98.20% 00:14:01.775 nvme2n1: ios=19796/0, merge=0/0, ticks=1234239/0, in_queue=1234239, util=98.40% 00:14:01.775 nvme3n1: ios=12837/0, merge=0/0, ticks=1231614/0, in_queue=1231614, util=98.37% 00:14:01.775 nvme4n1: ios=27167/0, merge=0/0, ticks=1236024/0, in_queue=1236024, util=98.53% 00:14:01.775 nvme5n1: ios=8306/0, merge=0/0, ticks=1231747/0, in_queue=1231747, util=98.76% 00:14:01.775 nvme6n1: ios=17533/0, merge=0/0, ticks=1235233/0, in_queue=1235233, util=98.84% 00:14:01.775 nvme7n1: ios=14327/0, merge=0/0, ticks=1205164/0, in_queue=1205164, util=98.94% 00:14:01.775 nvme8n1: ios=8281/0, merge=0/0, ticks=1227949/0, in_queue=1227949, util=99.04% 00:14:01.775 nvme9n1: ios=8291/0, merge=0/0, ticks=1229462/0, in_queue=1229462, util=99.15% 00:14:01.775 04:17:12 -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:14:01.775 [global] 00:14:01.775 thread=1 00:14:01.775 invalidate=1 00:14:01.775 rw=randwrite 00:14:01.775 time_based=1 00:14:01.775 runtime=10 00:14:01.775 ioengine=libaio 00:14:01.775 direct=1 00:14:01.775 bs=262144 00:14:01.775 iodepth=64 00:14:01.775 norandommap=1 00:14:01.775 numjobs=1 00:14:01.775 00:14:01.775 [job0] 00:14:01.775 filename=/dev/nvme0n1 00:14:01.775 [job1] 00:14:01.775 filename=/dev/nvme10n1 00:14:01.775 [job2] 00:14:01.775 filename=/dev/nvme1n1 00:14:01.775 [job3] 00:14:01.775 filename=/dev/nvme2n1 00:14:01.775 [job4] 00:14:01.775 filename=/dev/nvme3n1 00:14:01.775 [job5] 00:14:01.775 filename=/dev/nvme4n1 00:14:01.775 [job6] 00:14:01.776 filename=/dev/nvme5n1 00:14:01.776 [job7] 00:14:01.776 filename=/dev/nvme6n1 00:14:01.776 [job8] 00:14:01.776 filename=/dev/nvme7n1 00:14:01.776 [job9] 00:14:01.776 filename=/dev/nvme8n1 00:14:01.776 [job10] 00:14:01.776 filename=/dev/nvme9n1 00:14:01.776 Could not set queue depth (nvme0n1) 00:14:01.776 Could not set queue depth (nvme10n1) 00:14:01.776 Could not set queue depth (nvme1n1) 00:14:01.776 Could not set queue depth (nvme2n1) 00:14:01.776 Could not set queue depth (nvme3n1) 00:14:01.776 Could not set queue depth (nvme4n1) 00:14:01.776 Could not set queue depth (nvme5n1) 00:14:01.776 Could not set queue depth (nvme6n1) 00:14:01.776 Could not set queue depth (nvme7n1) 00:14:01.776 Could not set queue depth 
(nvme8n1) 00:14:01.776 Could not set queue depth (nvme9n1) 00:14:01.776 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:14:01.776 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:14:01.776 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:14:01.776 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:14:01.776 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:14:01.776 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:14:01.776 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:14:01.776 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:14:01.776 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:14:01.776 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:14:01.776 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:14:01.776 fio-3.35 00:14:01.776 Starting 11 threads 00:14:11.790 00:14:11.790 job0: (groupid=0, jobs=1): err= 0: pid=79453: Fri Dec 6 04:17:23 2024 00:14:11.790 write: IOPS=265, BW=66.3MiB/s (69.5MB/s)(675MiB/10181msec); 0 zone resets 00:14:11.790 slat (usec): min=23, max=126347, avg=3699.57, stdev=6787.39 00:14:11.790 clat (msec): min=132, max=414, avg=237.47, stdev=20.45 00:14:11.790 lat (msec): min=132, max=414, avg=241.17, stdev=19.66 00:14:11.790 clat percentiles (msec): 00:14:11.790 | 1.00th=[ 197], 5.00th=[ 218], 10.00th=[ 220], 20.00th=[ 226], 00:14:11.790 | 30.00th=[ 232], 40.00th=[ 234], 50.00th=[ 236], 60.00th=[ 236], 00:14:11.790 | 70.00th=[ 243], 80.00th=[ 249], 90.00th=[ 253], 95.00th=[ 264], 00:14:11.790 | 99.00th=[ 326], 99.50th=[ 372], 99.90th=[ 401], 99.95th=[ 414], 00:14:11.790 | 99.99th=[ 414] 00:14:11.790 bw ( KiB/s): min=53141, max=71680, per=5.96%, avg=67501.85, stdev=4537.44, samples=20 00:14:11.790 iops : min= 207, max= 280, avg=263.65, stdev=17.82, samples=20 00:14:11.790 lat (msec) : 250=84.48%, 500=15.52% 00:14:11.790 cpu : usr=0.60%, sys=0.72%, ctx=2831, majf=0, minf=1 00:14:11.790 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:14:11.790 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:11.790 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:14:11.790 issued rwts: total=0,2700,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:11.790 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:11.790 job1: (groupid=0, jobs=1): err= 0: pid=79454: Fri Dec 6 04:17:23 2024 00:14:11.790 write: IOPS=267, BW=66.9MiB/s (70.1MB/s)(682MiB/10197msec); 0 zone resets 00:14:11.790 slat (usec): min=23, max=71741, avg=3661.47, stdev=6502.07 00:14:11.790 clat (msec): min=20, max=429, avg=235.44, stdev=33.02 00:14:11.790 lat (msec): min=20, max=429, avg=239.10, stdev=32.91 00:14:11.790 clat percentiles (msec): 00:14:11.790 | 1.00th=[ 56], 5.00th=[ 213], 10.00th=[ 220], 20.00th=[ 226], 00:14:11.790 | 30.00th=[ 232], 40.00th=[ 234], 50.00th=[ 236], 60.00th=[ 
239], 00:14:11.790 | 70.00th=[ 245], 80.00th=[ 251], 90.00th=[ 257], 95.00th=[ 268], 00:14:11.790 | 99.00th=[ 326], 99.50th=[ 384], 99.90th=[ 414], 99.95th=[ 430], 00:14:11.790 | 99.99th=[ 430] 00:14:11.790 bw ( KiB/s): min=61440, max=73875, per=6.03%, avg=68224.40, stdev=3273.32, samples=20 00:14:11.790 iops : min= 240, max= 288, avg=266.45, stdev=12.73, samples=20 00:14:11.790 lat (msec) : 50=0.88%, 100=0.88%, 250=78.04%, 500=20.20% 00:14:11.790 cpu : usr=0.67%, sys=0.88%, ctx=3554, majf=0, minf=1 00:14:11.790 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:14:11.790 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:11.790 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:14:11.790 issued rwts: total=0,2728,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:11.790 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:11.790 job2: (groupid=0, jobs=1): err= 0: pid=79466: Fri Dec 6 04:17:23 2024 00:14:11.790 write: IOPS=266, BW=66.5MiB/s (69.7MB/s)(678MiB/10197msec); 0 zone resets 00:14:11.790 slat (usec): min=18, max=35959, avg=3683.25, stdev=6489.66 00:14:11.790 clat (msec): min=38, max=429, avg=236.76, stdev=31.49 00:14:11.790 lat (msec): min=38, max=429, avg=240.44, stdev=31.34 00:14:11.790 clat percentiles (msec): 00:14:11.790 | 1.00th=[ 81], 5.00th=[ 211], 10.00th=[ 218], 20.00th=[ 224], 00:14:11.791 | 30.00th=[ 230], 40.00th=[ 232], 50.00th=[ 234], 60.00th=[ 239], 00:14:11.791 | 70.00th=[ 247], 80.00th=[ 259], 90.00th=[ 264], 95.00th=[ 266], 00:14:11.791 | 99.00th=[ 326], 99.50th=[ 384], 99.90th=[ 414], 99.95th=[ 430], 00:14:11.791 | 99.99th=[ 430] 00:14:11.791 bw ( KiB/s): min=61440, max=75776, per=5.99%, avg=67847.20, stdev=4231.62, samples=20 00:14:11.791 iops : min= 240, max= 296, avg=265.00, stdev=16.50, samples=20 00:14:11.791 lat (msec) : 50=0.29%, 100=1.03%, 250=72.28%, 500=26.39% 00:14:11.791 cpu : usr=0.54%, sys=0.77%, ctx=2672, majf=0, minf=1 00:14:11.791 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:14:11.791 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:11.791 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:14:11.791 issued rwts: total=0,2713,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:11.791 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:11.791 job3: (groupid=0, jobs=1): err= 0: pid=79467: Fri Dec 6 04:17:23 2024 00:14:11.791 write: IOPS=351, BW=87.9MiB/s (92.2MB/s)(897MiB/10200msec); 0 zone resets 00:14:11.791 slat (usec): min=23, max=42944, avg=2690.55, stdev=5216.14 00:14:11.791 clat (msec): min=10, max=394, avg=179.20, stdev=67.21 00:14:11.791 lat (msec): min=10, max=394, avg=181.90, stdev=68.06 00:14:11.791 clat percentiles (msec): 00:14:11.791 | 1.00th=[ 34], 5.00th=[ 100], 10.00th=[ 106], 20.00th=[ 111], 00:14:11.791 | 30.00th=[ 115], 40.00th=[ 126], 50.00th=[ 218], 60.00th=[ 232], 00:14:11.791 | 70.00th=[ 234], 80.00th=[ 243], 90.00th=[ 251], 95.00th=[ 253], 00:14:11.791 | 99.00th=[ 296], 99.50th=[ 363], 99.90th=[ 384], 99.95th=[ 393], 00:14:11.791 | 99.99th=[ 393] 00:14:11.791 bw ( KiB/s): min=63488, max=163328, per=7.97%, avg=90200.45, stdev=35567.18, samples=20 00:14:11.791 iops : min= 248, max= 638, avg=352.30, stdev=138.87, samples=20 00:14:11.791 lat (msec) : 20=0.31%, 50=1.31%, 100=3.51%, 250=85.25%, 500=9.62% 00:14:11.791 cpu : usr=0.92%, sys=1.07%, ctx=3676, majf=0, minf=1 00:14:11.791 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.2% 
00:14:11.791 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:11.791 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:14:11.791 issued rwts: total=0,3587,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:11.791 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:11.791 job4: (groupid=0, jobs=1): err= 0: pid=79468: Fri Dec 6 04:17:23 2024 00:14:11.791 write: IOPS=488, BW=122MiB/s (128MB/s)(1230MiB/10074msec); 0 zone resets 00:14:11.791 slat (usec): min=20, max=112317, avg=1982.04, stdev=4526.89 00:14:11.791 clat (msec): min=2, max=314, avg=129.07, stdev=81.09 00:14:11.791 lat (msec): min=3, max=314, avg=131.06, stdev=82.28 00:14:11.791 clat percentiles (msec): 00:14:11.791 | 1.00th=[ 16], 5.00th=[ 59], 10.00th=[ 67], 20.00th=[ 70], 00:14:11.791 | 30.00th=[ 72], 40.00th=[ 74], 50.00th=[ 77], 60.00th=[ 79], 00:14:11.791 | 70.00th=[ 228], 80.00th=[ 234], 90.00th=[ 249], 95.00th=[ 251], 00:14:11.791 | 99.00th=[ 255], 99.50th=[ 257], 99.90th=[ 300], 99.95th=[ 317], 00:14:11.791 | 99.99th=[ 317] 00:14:11.791 bw ( KiB/s): min=57458, max=247296, per=10.98%, avg=124300.85, stdev=76899.66, samples=20 00:14:11.791 iops : min= 224, max= 966, avg=485.50, stdev=300.43, samples=20 00:14:11.791 lat (msec) : 4=0.04%, 10=0.53%, 20=1.00%, 50=2.58%, 100=59.64% 00:14:11.791 lat (msec) : 250=29.95%, 500=6.26% 00:14:11.791 cpu : usr=1.11%, sys=1.65%, ctx=2914, majf=0, minf=1 00:14:11.791 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:14:11.791 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:11.791 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:14:11.791 issued rwts: total=0,4918,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:11.791 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:11.791 job5: (groupid=0, jobs=1): err= 0: pid=79469: Fri Dec 6 04:17:23 2024 00:14:11.791 write: IOPS=450, BW=113MiB/s (118MB/s)(1142MiB/10136msec); 0 zone resets 00:14:11.791 slat (usec): min=21, max=80020, avg=2184.56, stdev=3923.77 00:14:11.791 clat (msec): min=16, max=290, avg=139.83, stdev=24.10 00:14:11.791 lat (msec): min=16, max=290, avg=142.02, stdev=24.14 00:14:11.791 clat percentiles (msec): 00:14:11.791 | 1.00th=[ 102], 5.00th=[ 106], 10.00th=[ 110], 20.00th=[ 113], 00:14:11.791 | 30.00th=[ 138], 40.00th=[ 144], 50.00th=[ 146], 60.00th=[ 148], 00:14:11.791 | 70.00th=[ 153], 80.00th=[ 155], 90.00th=[ 157], 95.00th=[ 161], 00:14:11.791 | 99.00th=[ 220], 99.50th=[ 230], 99.90th=[ 279], 99.95th=[ 279], 00:14:11.791 | 99.99th=[ 292] 00:14:11.791 bw ( KiB/s): min=90112, max=147456, per=10.18%, avg=115262.25, stdev=16607.30, samples=20 00:14:11.791 iops : min= 352, max= 576, avg=450.20, stdev=64.79, samples=20 00:14:11.791 lat (msec) : 20=0.09%, 50=0.35%, 100=0.44%, 250=98.82%, 500=0.31% 00:14:11.791 cpu : usr=1.01%, sys=1.46%, ctx=5346, majf=0, minf=1 00:14:11.791 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:14:11.791 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:11.791 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:14:11.791 issued rwts: total=0,4566,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:11.791 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:11.791 job6: (groupid=0, jobs=1): err= 0: pid=79470: Fri Dec 6 04:17:23 2024 00:14:11.791 write: IOPS=268, BW=67.1MiB/s (70.3MB/s)(684MiB/10195msec); 0 zone resets 00:14:11.791 slat (usec): min=23, max=70867, avg=3652.52, 
stdev=6431.26 00:14:11.791 clat (msec): min=77, max=423, avg=234.69, stdev=23.49 00:14:11.791 lat (msec): min=77, max=423, avg=238.34, stdev=22.98 00:14:11.791 clat percentiles (msec): 00:14:11.791 | 1.00th=[ 144], 5.00th=[ 213], 10.00th=[ 218], 20.00th=[ 224], 00:14:11.791 | 30.00th=[ 228], 40.00th=[ 232], 50.00th=[ 234], 60.00th=[ 236], 00:14:11.791 | 70.00th=[ 241], 80.00th=[ 249], 90.00th=[ 253], 95.00th=[ 257], 00:14:11.791 | 99.00th=[ 317], 99.50th=[ 380], 99.90th=[ 409], 99.95th=[ 422], 00:14:11.791 | 99.99th=[ 422] 00:14:11.791 bw ( KiB/s): min=65536, max=73580, per=6.04%, avg=68414.45, stdev=2748.26, samples=20 00:14:11.791 iops : min= 256, max= 287, avg=267.20, stdev=10.68, samples=20 00:14:11.791 lat (msec) : 100=0.44%, 250=84.83%, 500=14.73% 00:14:11.791 cpu : usr=0.55%, sys=0.86%, ctx=3336, majf=0, minf=1 00:14:11.791 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:14:11.791 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:11.791 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:14:11.791 issued rwts: total=0,2736,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:11.791 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:11.791 job7: (groupid=0, jobs=1): err= 0: pid=79471: Fri Dec 6 04:17:23 2024 00:14:11.791 write: IOPS=489, BW=122MiB/s (128MB/s)(1240MiB/10134msec); 0 zone resets 00:14:11.791 slat (usec): min=22, max=12476, avg=1962.60, stdev=3519.30 00:14:11.791 clat (msec): min=9, max=287, avg=128.78, stdev=31.11 00:14:11.791 lat (msec): min=9, max=287, avg=130.74, stdev=31.42 00:14:11.791 clat percentiles (msec): 00:14:11.791 | 1.00th=[ 54], 5.00th=[ 67], 10.00th=[ 73], 20.00th=[ 108], 00:14:11.791 | 30.00th=[ 113], 40.00th=[ 125], 50.00th=[ 144], 60.00th=[ 146], 00:14:11.791 | 70.00th=[ 148], 80.00th=[ 153], 90.00th=[ 157], 95.00th=[ 159], 00:14:11.791 | 99.00th=[ 169], 99.50th=[ 228], 99.90th=[ 279], 99.95th=[ 279], 00:14:11.791 | 99.99th=[ 288] 00:14:11.791 bw ( KiB/s): min=100864, max=227840, per=11.07%, avg=125323.90, stdev=31147.87, samples=20 00:14:11.791 iops : min= 394, max= 890, avg=489.50, stdev=121.65, samples=20 00:14:11.791 lat (msec) : 10=0.08%, 20=0.16%, 50=0.56%, 100=13.05%, 250=85.86% 00:14:11.791 lat (msec) : 500=0.28% 00:14:11.791 cpu : usr=1.20%, sys=1.41%, ctx=5991, majf=0, minf=2 00:14:11.791 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:14:11.791 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:11.791 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:14:11.791 issued rwts: total=0,4959,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:11.791 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:11.791 job8: (groupid=0, jobs=1): err= 0: pid=79472: Fri Dec 6 04:17:23 2024 00:14:11.791 write: IOPS=570, BW=143MiB/s (150MB/s)(1442MiB/10105msec); 0 zone resets 00:14:11.791 slat (usec): min=20, max=45123, avg=1729.43, stdev=2977.56 00:14:11.791 clat (msec): min=49, max=209, avg=110.39, stdev= 7.55 00:14:11.791 lat (msec): min=49, max=209, avg=112.12, stdev= 7.06 00:14:11.791 clat percentiles (msec): 00:14:11.791 | 1.00th=[ 101], 5.00th=[ 104], 10.00th=[ 105], 20.00th=[ 107], 00:14:11.791 | 30.00th=[ 109], 40.00th=[ 110], 50.00th=[ 111], 60.00th=[ 112], 00:14:11.791 | 70.00th=[ 113], 80.00th=[ 114], 90.00th=[ 115], 95.00th=[ 116], 00:14:11.791 | 99.00th=[ 132], 99.50th=[ 161], 99.90th=[ 203], 99.95th=[ 203], 00:14:11.791 | 99.99th=[ 209] 00:14:11.791 bw ( KiB/s): min=131334, max=150528, 
per=12.90%, avg=146009.90, stdev=4124.47, samples=20 00:14:11.791 iops : min= 513, max= 588, avg=570.35, stdev=16.12, samples=20 00:14:11.791 lat (msec) : 50=0.07%, 100=0.69%, 250=99.24% 00:14:11.791 cpu : usr=1.05%, sys=1.72%, ctx=6184, majf=0, minf=1 00:14:11.791 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:14:11.791 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:11.791 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:14:11.791 issued rwts: total=0,5766,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:11.791 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:11.791 job9: (groupid=0, jobs=1): err= 0: pid=79473: Fri Dec 6 04:17:23 2024 00:14:11.791 write: IOPS=574, BW=144MiB/s (151MB/s)(1452MiB/10107msec); 0 zone resets 00:14:11.791 slat (usec): min=19, max=9811, avg=1716.91, stdev=2908.54 00:14:11.791 clat (msec): min=8, max=212, avg=109.65, stdev=10.40 00:14:11.791 lat (msec): min=8, max=212, avg=111.36, stdev=10.15 00:14:11.791 clat percentiles (msec): 00:14:11.791 | 1.00th=[ 73], 5.00th=[ 103], 10.00th=[ 105], 20.00th=[ 107], 00:14:11.791 | 30.00th=[ 109], 40.00th=[ 110], 50.00th=[ 111], 60.00th=[ 112], 00:14:11.791 | 70.00th=[ 113], 80.00th=[ 113], 90.00th=[ 115], 95.00th=[ 116], 00:14:11.791 | 99.00th=[ 118], 99.50th=[ 163], 99.90th=[ 207], 99.95th=[ 207], 00:14:11.791 | 99.99th=[ 213] 00:14:11.791 bw ( KiB/s): min=140288, max=153600, per=12.98%, avg=147006.20, stdev=2752.98, samples=20 00:14:11.791 iops : min= 548, max= 600, avg=574.20, stdev=10.78, samples=20 00:14:11.791 lat (msec) : 10=0.07%, 20=0.14%, 50=0.48%, 100=1.12%, 250=98.19% 00:14:11.791 cpu : usr=1.24%, sys=1.80%, ctx=7579, majf=0, minf=1 00:14:11.792 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:14:11.792 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:11.792 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:14:11.792 issued rwts: total=0,5806,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:11.792 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:11.792 job10: (groupid=0, jobs=1): err= 0: pid=79474: Fri Dec 6 04:17:23 2024 00:14:11.792 write: IOPS=456, BW=114MiB/s (120MB/s)(1158MiB/10140msec); 0 zone resets 00:14:11.792 slat (usec): min=21, max=43383, avg=2126.33, stdev=3765.37 00:14:11.792 clat (msec): min=21, max=293, avg=137.95, stdev=23.15 00:14:11.792 lat (msec): min=21, max=293, avg=140.08, stdev=23.22 00:14:11.792 clat percentiles (msec): 00:14:11.792 | 1.00th=[ 71], 5.00th=[ 105], 10.00th=[ 109], 20.00th=[ 112], 00:14:11.792 | 30.00th=[ 133], 40.00th=[ 142], 50.00th=[ 146], 60.00th=[ 148], 00:14:11.792 | 70.00th=[ 150], 80.00th=[ 155], 90.00th=[ 157], 95.00th=[ 159], 00:14:11.792 | 99.00th=[ 182], 99.50th=[ 232], 99.90th=[ 284], 99.95th=[ 284], 00:14:11.792 | 99.99th=[ 292] 00:14:11.792 bw ( KiB/s): min=100864, max=149504, per=10.33%, avg=116900.65, stdev=16590.53, samples=20 00:14:11.792 iops : min= 394, max= 584, avg=456.60, stdev=64.73, samples=20 00:14:11.792 lat (msec) : 50=0.41%, 100=1.27%, 250=97.93%, 500=0.39% 00:14:11.792 cpu : usr=0.95%, sys=1.20%, ctx=5928, majf=0, minf=1 00:14:11.792 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:14:11.792 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:11.792 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:14:11.792 issued rwts: total=0,4630,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:11.792 
latency : target=0, window=0, percentile=100.00%, depth=64 00:14:11.792 00:14:11.792 Run status group 0 (all jobs): 00:14:11.792 WRITE: bw=1106MiB/s (1159MB/s), 66.3MiB/s-144MiB/s (69.5MB/s-151MB/s), io=11.0GiB (11.8GB), run=10074-10200msec 00:14:11.792 00:14:11.792 Disk stats (read/write): 00:14:11.792 nvme0n1: ios=50/5244, merge=0/0, ticks=64/1202397, in_queue=1202461, util=97.52% 00:14:11.792 nvme10n1: ios=49/5312, merge=0/0, ticks=57/1204614, in_queue=1204671, util=97.83% 00:14:11.792 nvme1n1: ios=47/5280, merge=0/0, ticks=55/1204738, in_queue=1204793, util=98.09% 00:14:11.792 nvme2n1: ios=34/7035, merge=0/0, ticks=46/1208248, in_queue=1208294, util=98.13% 00:14:11.792 nvme3n1: ios=30/9645, merge=0/0, ticks=39/1214936, in_queue=1214975, util=98.00% 00:14:11.792 nvme4n1: ios=0/8979, merge=0/0, ticks=0/1209300, in_queue=1209300, util=98.12% 00:14:11.792 nvme5n1: ios=0/5321, merge=0/0, ticks=0/1203992, in_queue=1203992, util=98.14% 00:14:11.792 nvme6n1: ios=0/9761, merge=0/0, ticks=0/1209956, in_queue=1209956, util=98.27% 00:14:11.792 nvme7n1: ios=0/11333, merge=0/0, ticks=0/1209049, in_queue=1209049, util=98.34% 00:14:11.792 nvme8n1: ios=0/11432, merge=0/0, ticks=0/1210388, in_queue=1210388, util=98.62% 00:14:11.792 nvme9n1: ios=0/9112, merge=0/0, ticks=0/1210518, in_queue=1210518, util=98.88% 00:14:11.792 04:17:23 -- target/multiconnection.sh@36 -- # sync 00:14:11.792 04:17:23 -- target/multiconnection.sh@37 -- # seq 1 11 00:14:11.792 04:17:23 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:14:11.792 04:17:23 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:11.792 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:11.792 04:17:23 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:14:11.792 04:17:23 -- common/autotest_common.sh@1208 -- # local i=0 00:14:11.792 04:17:23 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:14:11.792 04:17:23 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK1 00:14:11.792 04:17:23 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:14:11.792 04:17:23 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK1 00:14:11.792 04:17:23 -- common/autotest_common.sh@1220 -- # return 0 00:14:11.792 04:17:23 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:11.792 04:17:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.792 04:17:23 -- common/autotest_common.sh@10 -- # set +x 00:14:11.792 04:17:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.792 04:17:23 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:14:11.792 04:17:23 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:14:11.792 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:14:11.792 04:17:23 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:14:11.792 04:17:23 -- common/autotest_common.sh@1208 -- # local i=0 00:14:11.792 04:17:23 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:14:11.792 04:17:23 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK2 00:14:11.792 04:17:23 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:14:11.792 04:17:23 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK2 00:14:11.792 04:17:23 -- common/autotest_common.sh@1220 -- # return 0 00:14:11.792 04:17:23 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:14:11.792 
04:17:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.792 04:17:23 -- common/autotest_common.sh@10 -- # set +x 00:14:11.792 04:17:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.792 04:17:23 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:14:11.792 04:17:23 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:14:11.792 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:14:11.792 04:17:23 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:14:11.792 04:17:23 -- common/autotest_common.sh@1208 -- # local i=0 00:14:11.792 04:17:23 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:14:11.792 04:17:23 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK3 00:14:11.792 04:17:23 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:14:11.792 04:17:23 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK3 00:14:11.792 04:17:23 -- common/autotest_common.sh@1220 -- # return 0 00:14:11.792 04:17:23 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:14:11.792 04:17:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.792 04:17:23 -- common/autotest_common.sh@10 -- # set +x 00:14:11.792 04:17:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.792 04:17:23 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:14:11.792 04:17:23 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:14:11.792 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:14:11.792 04:17:23 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:14:11.792 04:17:23 -- common/autotest_common.sh@1208 -- # local i=0 00:14:11.792 04:17:23 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:14:11.792 04:17:23 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK4 00:14:11.792 04:17:23 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:14:11.792 04:17:23 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK4 00:14:11.792 04:17:23 -- common/autotest_common.sh@1220 -- # return 0 00:14:11.792 04:17:23 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:14:11.792 04:17:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.792 04:17:23 -- common/autotest_common.sh@10 -- # set +x 00:14:11.792 04:17:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.792 04:17:23 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:14:11.792 04:17:23 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:14:11.792 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:14:11.792 04:17:23 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:14:11.792 04:17:23 -- common/autotest_common.sh@1208 -- # local i=0 00:14:11.792 04:17:23 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:14:11.792 04:17:23 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK5 00:14:11.792 04:17:23 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:14:11.792 04:17:23 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK5 00:14:11.792 04:17:23 -- common/autotest_common.sh@1220 -- # return 0 00:14:11.792 04:17:23 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:14:11.792 04:17:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.792 04:17:23 -- 
common/autotest_common.sh@10 -- # set +x 00:14:11.792 04:17:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.792 04:17:23 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:14:11.792 04:17:23 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:14:11.792 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:14:11.792 04:17:23 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:14:11.792 04:17:23 -- common/autotest_common.sh@1208 -- # local i=0 00:14:11.792 04:17:23 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:14:11.792 04:17:23 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK6 00:14:11.792 04:17:23 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK6 00:14:11.792 04:17:23 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:14:11.792 04:17:23 -- common/autotest_common.sh@1220 -- # return 0 00:14:11.792 04:17:23 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:14:11.792 04:17:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.792 04:17:23 -- common/autotest_common.sh@10 -- # set +x 00:14:11.792 04:17:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.792 04:17:23 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:14:11.792 04:17:23 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:14:11.792 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:14:11.792 04:17:23 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:14:11.792 04:17:23 -- common/autotest_common.sh@1208 -- # local i=0 00:14:11.792 04:17:23 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:14:11.792 04:17:23 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK7 00:14:11.792 04:17:23 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:14:11.792 04:17:23 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK7 00:14:11.792 04:17:23 -- common/autotest_common.sh@1220 -- # return 0 00:14:11.792 04:17:23 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:14:11.792 04:17:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.792 04:17:23 -- common/autotest_common.sh@10 -- # set +x 00:14:11.792 04:17:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.792 04:17:23 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:14:11.792 04:17:23 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:14:11.792 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:14:11.792 04:17:23 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:14:11.792 04:17:23 -- common/autotest_common.sh@1208 -- # local i=0 00:14:11.792 04:17:23 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:14:11.792 04:17:23 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK8 00:14:11.792 04:17:23 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK8 00:14:11.793 04:17:23 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:14:11.793 04:17:23 -- common/autotest_common.sh@1220 -- # return 0 00:14:11.793 04:17:23 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:14:11.793 04:17:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.793 04:17:23 -- common/autotest_common.sh@10 -- # set +x 00:14:11.793 04:17:23 -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:14:11.793 04:17:23 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:14:11.793 04:17:23 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:14:11.793 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:14:11.793 04:17:23 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:14:11.793 04:17:23 -- common/autotest_common.sh@1208 -- # local i=0 00:14:11.793 04:17:23 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:14:11.793 04:17:23 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK9 00:14:11.793 04:17:24 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:14:11.793 04:17:24 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK9 00:14:11.793 04:17:24 -- common/autotest_common.sh@1220 -- # return 0 00:14:11.793 04:17:24 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:14:11.793 04:17:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.793 04:17:24 -- common/autotest_common.sh@10 -- # set +x 00:14:11.793 04:17:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.793 04:17:24 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:14:11.793 04:17:24 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:14:11.793 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:14:11.793 04:17:24 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:14:11.793 04:17:24 -- common/autotest_common.sh@1208 -- # local i=0 00:14:11.793 04:17:24 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:14:11.793 04:17:24 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK10 00:14:11.793 04:17:24 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:14:11.793 04:17:24 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK10 00:14:11.793 04:17:24 -- common/autotest_common.sh@1220 -- # return 0 00:14:11.793 04:17:24 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:14:11.793 04:17:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.793 04:17:24 -- common/autotest_common.sh@10 -- # set +x 00:14:11.793 04:17:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.793 04:17:24 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:14:11.793 04:17:24 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:14:11.793 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:14:11.793 04:17:24 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:14:11.793 04:17:24 -- common/autotest_common.sh@1208 -- # local i=0 00:14:11.793 04:17:24 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:14:11.793 04:17:24 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK11 00:14:11.793 04:17:24 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:14:11.793 04:17:24 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK11 00:14:11.793 04:17:24 -- common/autotest_common.sh@1220 -- # return 0 00:14:11.793 04:17:24 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:14:11.793 04:17:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.793 04:17:24 -- common/autotest_common.sh@10 -- # set +x 00:14:11.793 04:17:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.793 04:17:24 -- target/multiconnection.sh@43 -- # rm -f 
./local-job0-0-verify.state 00:14:11.793 04:17:24 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:14:11.793 04:17:24 -- target/multiconnection.sh@47 -- # nvmftestfini 00:14:11.793 04:17:24 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:11.793 04:17:24 -- nvmf/common.sh@116 -- # sync 00:14:11.793 04:17:24 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:11.793 04:17:24 -- nvmf/common.sh@119 -- # set +e 00:14:11.793 04:17:24 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:11.793 04:17:24 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:11.793 rmmod nvme_tcp 00:14:11.793 rmmod nvme_fabrics 00:14:11.793 rmmod nvme_keyring 00:14:11.793 04:17:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:11.793 04:17:24 -- nvmf/common.sh@123 -- # set -e 00:14:11.793 04:17:24 -- nvmf/common.sh@124 -- # return 0 00:14:11.793 04:17:24 -- nvmf/common.sh@477 -- # '[' -n 78779 ']' 00:14:11.793 04:17:24 -- nvmf/common.sh@478 -- # killprocess 78779 00:14:11.793 04:17:24 -- common/autotest_common.sh@936 -- # '[' -z 78779 ']' 00:14:11.793 04:17:24 -- common/autotest_common.sh@940 -- # kill -0 78779 00:14:11.793 04:17:24 -- common/autotest_common.sh@941 -- # uname 00:14:11.793 04:17:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:11.793 04:17:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78779 00:14:12.053 killing process with pid 78779 00:14:12.053 04:17:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:12.053 04:17:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:12.053 04:17:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78779' 00:14:12.053 04:17:24 -- common/autotest_common.sh@955 -- # kill 78779 00:14:12.053 04:17:24 -- common/autotest_common.sh@960 -- # wait 78779 00:14:12.621 04:17:25 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:12.621 04:17:25 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:12.621 04:17:25 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:12.621 04:17:25 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:12.621 04:17:25 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:12.621 04:17:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:12.621 04:17:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:12.621 04:17:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:12.621 04:17:25 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:12.621 00:14:12.621 real 0m49.785s 00:14:12.621 user 2m42.880s 00:14:12.621 sys 0m34.746s 00:14:12.621 04:17:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:12.621 04:17:25 -- common/autotest_common.sh@10 -- # set +x 00:14:12.621 ************************************ 00:14:12.621 END TEST nvmf_multiconnection 00:14:12.621 ************************************ 00:14:12.621 04:17:25 -- nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:14:12.621 04:17:25 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:12.621 04:17:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:12.621 04:17:25 -- common/autotest_common.sh@10 -- # set +x 00:14:12.621 ************************************ 00:14:12.621 START TEST nvmf_initiator_timeout 00:14:12.622 ************************************ 00:14:12.622 04:17:25 -- common/autotest_common.sh@1114 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:14:12.880 * Looking for test storage... 00:14:12.880 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:12.880 04:17:25 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:12.880 04:17:25 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:12.880 04:17:25 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:12.880 04:17:25 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:12.880 04:17:25 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:12.880 04:17:25 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:12.880 04:17:25 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:12.880 04:17:25 -- scripts/common.sh@335 -- # IFS=.-: 00:14:12.880 04:17:25 -- scripts/common.sh@335 -- # read -ra ver1 00:14:12.880 04:17:25 -- scripts/common.sh@336 -- # IFS=.-: 00:14:12.880 04:17:25 -- scripts/common.sh@336 -- # read -ra ver2 00:14:12.880 04:17:25 -- scripts/common.sh@337 -- # local 'op=<' 00:14:12.880 04:17:25 -- scripts/common.sh@339 -- # ver1_l=2 00:14:12.880 04:17:25 -- scripts/common.sh@340 -- # ver2_l=1 00:14:12.880 04:17:25 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:12.880 04:17:25 -- scripts/common.sh@343 -- # case "$op" in 00:14:12.880 04:17:25 -- scripts/common.sh@344 -- # : 1 00:14:12.880 04:17:25 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:12.880 04:17:25 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:12.880 04:17:25 -- scripts/common.sh@364 -- # decimal 1 00:14:12.880 04:17:25 -- scripts/common.sh@352 -- # local d=1 00:14:12.880 04:17:25 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:12.880 04:17:25 -- scripts/common.sh@354 -- # echo 1 00:14:12.880 04:17:25 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:12.880 04:17:25 -- scripts/common.sh@365 -- # decimal 2 00:14:12.880 04:17:25 -- scripts/common.sh@352 -- # local d=2 00:14:12.880 04:17:25 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:12.880 04:17:25 -- scripts/common.sh@354 -- # echo 2 00:14:12.880 04:17:25 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:12.880 04:17:25 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:12.880 04:17:25 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:12.880 04:17:25 -- scripts/common.sh@367 -- # return 0 00:14:12.880 04:17:25 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:12.880 04:17:25 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:12.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:12.880 --rc genhtml_branch_coverage=1 00:14:12.880 --rc genhtml_function_coverage=1 00:14:12.880 --rc genhtml_legend=1 00:14:12.880 --rc geninfo_all_blocks=1 00:14:12.880 --rc geninfo_unexecuted_blocks=1 00:14:12.880 00:14:12.880 ' 00:14:12.880 04:17:25 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:12.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:12.880 --rc genhtml_branch_coverage=1 00:14:12.880 --rc genhtml_function_coverage=1 00:14:12.880 --rc genhtml_legend=1 00:14:12.880 --rc geninfo_all_blocks=1 00:14:12.880 --rc geninfo_unexecuted_blocks=1 00:14:12.880 00:14:12.880 ' 00:14:12.880 04:17:25 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:12.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:12.880 --rc genhtml_branch_coverage=1 00:14:12.880 --rc genhtml_function_coverage=1 00:14:12.880 --rc genhtml_legend=1 
00:14:12.880 --rc geninfo_all_blocks=1 00:14:12.880 --rc geninfo_unexecuted_blocks=1 00:14:12.880 00:14:12.880 ' 00:14:12.880 04:17:25 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:12.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:12.880 --rc genhtml_branch_coverage=1 00:14:12.881 --rc genhtml_function_coverage=1 00:14:12.881 --rc genhtml_legend=1 00:14:12.881 --rc geninfo_all_blocks=1 00:14:12.881 --rc geninfo_unexecuted_blocks=1 00:14:12.881 00:14:12.881 ' 00:14:12.881 04:17:25 -- target/initiator_timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:12.881 04:17:25 -- nvmf/common.sh@7 -- # uname -s 00:14:12.881 04:17:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:12.881 04:17:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:12.881 04:17:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:12.881 04:17:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:12.881 04:17:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:12.881 04:17:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:12.881 04:17:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:12.881 04:17:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:12.881 04:17:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:12.881 04:17:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:12.881 04:17:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cb4d3929-adbe-4142-b5d1-990bbf2d4fca 00:14:12.881 04:17:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=cb4d3929-adbe-4142-b5d1-990bbf2d4fca 00:14:12.881 04:17:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:12.881 04:17:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:12.881 04:17:25 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:12.881 04:17:25 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:12.881 04:17:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:12.881 04:17:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:12.881 04:17:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:12.881 04:17:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.881 04:17:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.881 04:17:25 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.881 04:17:25 -- paths/export.sh@5 -- # export PATH 00:14:12.881 04:17:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.881 04:17:25 -- nvmf/common.sh@46 -- # : 0 00:14:12.881 04:17:25 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:12.881 04:17:25 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:12.881 04:17:25 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:12.881 04:17:25 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:12.881 04:17:25 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:12.881 04:17:25 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:12.881 04:17:25 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:12.881 04:17:25 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:12.881 04:17:25 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:12.881 04:17:25 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:12.881 04:17:25 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:14:12.881 04:17:25 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:12.881 04:17:25 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:12.881 04:17:25 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:12.881 04:17:25 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:12.881 04:17:25 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:12.881 04:17:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:12.881 04:17:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:12.881 04:17:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:12.881 04:17:25 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:12.881 04:17:25 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:12.881 04:17:25 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:12.881 04:17:25 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:12.881 04:17:25 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:12.881 04:17:25 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:12.881 04:17:25 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:12.881 04:17:25 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:12.881 04:17:25 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:12.881 04:17:25 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:12.881 04:17:25 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:12.881 04:17:25 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:12.881 04:17:25 
-- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:12.881 04:17:25 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:12.881 04:17:25 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:12.881 04:17:25 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:12.881 04:17:25 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:12.881 04:17:25 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:12.881 04:17:25 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:12.881 04:17:25 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:12.881 Cannot find device "nvmf_tgt_br" 00:14:12.881 04:17:25 -- nvmf/common.sh@154 -- # true 00:14:12.881 04:17:25 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:12.881 Cannot find device "nvmf_tgt_br2" 00:14:12.881 04:17:25 -- nvmf/common.sh@155 -- # true 00:14:12.881 04:17:25 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:12.881 04:17:25 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:12.881 Cannot find device "nvmf_tgt_br" 00:14:12.881 04:17:25 -- nvmf/common.sh@157 -- # true 00:14:12.881 04:17:25 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:12.881 Cannot find device "nvmf_tgt_br2" 00:14:12.881 04:17:25 -- nvmf/common.sh@158 -- # true 00:14:12.881 04:17:25 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:13.138 04:17:25 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:13.138 04:17:25 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:13.138 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:13.138 04:17:25 -- nvmf/common.sh@161 -- # true 00:14:13.138 04:17:25 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:13.138 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:13.138 04:17:25 -- nvmf/common.sh@162 -- # true 00:14:13.138 04:17:25 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:13.138 04:17:25 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:13.138 04:17:25 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:13.138 04:17:25 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:13.138 04:17:25 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:13.138 04:17:25 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:13.138 04:17:25 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:13.138 04:17:25 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:13.138 04:17:25 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:13.138 04:17:25 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:13.138 04:17:25 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:13.138 04:17:25 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:13.138 04:17:25 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:13.138 04:17:25 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:13.138 04:17:25 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:13.138 04:17:25 -- nvmf/common.sh@188 -- # ip netns 
exec nvmf_tgt_ns_spdk ip link set lo up 00:14:13.138 04:17:25 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:13.138 04:17:25 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:13.138 04:17:25 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:13.138 04:17:25 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:13.138 04:17:25 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:13.138 04:17:25 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:13.138 04:17:25 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:13.138 04:17:25 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:13.139 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:13.139 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.308 ms 00:14:13.139 00:14:13.139 --- 10.0.0.2 ping statistics --- 00:14:13.139 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:13.139 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms 00:14:13.139 04:17:25 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:13.139 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:13.139 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.034 ms 00:14:13.139 00:14:13.139 --- 10.0.0.3 ping statistics --- 00:14:13.139 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:13.139 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:14:13.139 04:17:25 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:13.139 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:13.139 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:14:13.139 00:14:13.139 --- 10.0.0.1 ping statistics --- 00:14:13.139 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:13.139 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:14:13.139 04:17:25 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:13.139 04:17:25 -- nvmf/common.sh@421 -- # return 0 00:14:13.139 04:17:25 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:13.139 04:17:25 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:13.139 04:17:25 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:13.139 04:17:25 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:13.139 04:17:25 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:13.139 04:17:25 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:13.139 04:17:25 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:13.139 04:17:25 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:14:13.139 04:17:25 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:13.139 04:17:25 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:13.139 04:17:25 -- common/autotest_common.sh@10 -- # set +x 00:14:13.397 04:17:25 -- nvmf/common.sh@469 -- # nvmfpid=79852 00:14:13.397 04:17:25 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:13.397 04:17:25 -- nvmf/common.sh@470 -- # waitforlisten 79852 00:14:13.397 04:17:25 -- common/autotest_common.sh@829 -- # '[' -z 79852 ']' 00:14:13.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:13.397 04:17:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:13.397 04:17:25 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:13.397 04:17:25 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:13.397 04:17:25 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:13.397 04:17:25 -- common/autotest_common.sh@10 -- # set +x 00:14:13.397 [2024-12-06 04:17:25.751698] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:13.397 [2024-12-06 04:17:25.751795] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:13.397 [2024-12-06 04:17:25.896058] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:13.655 [2024-12-06 04:17:25.980468] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:13.655 [2024-12-06 04:17:25.980642] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:13.655 [2024-12-06 04:17:25.980659] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:13.655 [2024-12-06 04:17:25.980671] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:13.655 [2024-12-06 04:17:25.980836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:13.655 [2024-12-06 04:17:25.981184] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:13.655 [2024-12-06 04:17:25.981684] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:13.655 [2024-12-06 04:17:25.981695] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:14.221 04:17:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:14.221 04:17:26 -- common/autotest_common.sh@862 -- # return 0 00:14:14.221 04:17:26 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:14.221 04:17:26 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:14.221 04:17:26 -- common/autotest_common.sh@10 -- # set +x 00:14:14.479 04:17:26 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:14.479 04:17:26 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:14.479 04:17:26 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:14.479 04:17:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.479 04:17:26 -- common/autotest_common.sh@10 -- # set +x 00:14:14.479 Malloc0 00:14:14.479 04:17:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.479 04:17:26 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:14:14.479 04:17:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.479 04:17:26 -- common/autotest_common.sh@10 -- # set +x 00:14:14.479 Delay0 00:14:14.479 04:17:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.479 04:17:26 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:14.479 04:17:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.480 04:17:26 -- common/autotest_common.sh@10 -- # set +x 
00:14:14.480 [2024-12-06 04:17:26.858845] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:14.480 04:17:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.480 04:17:26 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:14.480 04:17:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.480 04:17:26 -- common/autotest_common.sh@10 -- # set +x 00:14:14.480 04:17:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.480 04:17:26 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:14.480 04:17:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.480 04:17:26 -- common/autotest_common.sh@10 -- # set +x 00:14:14.480 04:17:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.480 04:17:26 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:14.480 04:17:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.480 04:17:26 -- common/autotest_common.sh@10 -- # set +x 00:14:14.480 [2024-12-06 04:17:26.887034] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:14.480 04:17:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.480 04:17:26 -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cb4d3929-adbe-4142-b5d1-990bbf2d4fca --hostid=cb4d3929-adbe-4142-b5d1-990bbf2d4fca -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:14.480 04:17:27 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:14:14.480 04:17:27 -- common/autotest_common.sh@1187 -- # local i=0 00:14:14.480 04:17:27 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:14:14.480 04:17:27 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:14:14.480 04:17:27 -- common/autotest_common.sh@1194 -- # sleep 2 00:14:17.037 04:17:29 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:14:17.037 04:17:29 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:14:17.037 04:17:29 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:14:17.037 04:17:29 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:14:17.037 04:17:29 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:14:17.037 04:17:29 -- common/autotest_common.sh@1197 -- # return 0 00:14:17.037 04:17:29 -- target/initiator_timeout.sh@35 -- # fio_pid=79916 00:14:17.037 04:17:29 -- target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:14:17.037 04:17:29 -- target/initiator_timeout.sh@37 -- # sleep 3 00:14:17.037 [global] 00:14:17.037 thread=1 00:14:17.037 invalidate=1 00:14:17.037 rw=write 00:14:17.037 time_based=1 00:14:17.037 runtime=60 00:14:17.037 ioengine=libaio 00:14:17.037 direct=1 00:14:17.037 bs=4096 00:14:17.037 iodepth=1 00:14:17.037 norandommap=0 00:14:17.037 numjobs=1 00:14:17.037 00:14:17.037 verify_dump=1 00:14:17.037 verify_backlog=512 00:14:17.037 verify_state_save=0 00:14:17.037 do_verify=1 00:14:17.037 verify=crc32c-intel 00:14:17.037 [job0] 00:14:17.037 filename=/dev/nvme0n1 00:14:17.037 Could not set queue depth (nvme0n1) 00:14:17.037 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:17.037 fio-3.35 00:14:17.037 
Starting 1 thread 00:14:19.606 04:17:32 -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:14:19.606 04:17:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.606 04:17:32 -- common/autotest_common.sh@10 -- # set +x 00:14:19.606 true 00:14:19.606 04:17:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.606 04:17:32 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:14:19.606 04:17:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.606 04:17:32 -- common/autotest_common.sh@10 -- # set +x 00:14:19.606 true 00:14:19.606 04:17:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.606 04:17:32 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:14:19.606 04:17:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.606 04:17:32 -- common/autotest_common.sh@10 -- # set +x 00:14:19.606 true 00:14:19.606 04:17:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.606 04:17:32 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:14:19.606 04:17:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.606 04:17:32 -- common/autotest_common.sh@10 -- # set +x 00:14:19.606 true 00:14:19.606 04:17:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.606 04:17:32 -- target/initiator_timeout.sh@45 -- # sleep 3 00:14:22.889 04:17:35 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:14:22.889 04:17:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.889 04:17:35 -- common/autotest_common.sh@10 -- # set +x 00:14:22.889 true 00:14:22.889 04:17:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.889 04:17:35 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:14:22.889 04:17:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.889 04:17:35 -- common/autotest_common.sh@10 -- # set +x 00:14:22.889 true 00:14:22.889 04:17:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.889 04:17:35 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:14:22.889 04:17:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.889 04:17:35 -- common/autotest_common.sh@10 -- # set +x 00:14:22.889 true 00:14:22.889 04:17:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.889 04:17:35 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:14:22.889 04:17:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.889 04:17:35 -- common/autotest_common.sh@10 -- # set +x 00:14:22.889 true 00:14:22.889 04:17:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.889 04:17:35 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:14:22.889 04:17:35 -- target/initiator_timeout.sh@54 -- # wait 79916 00:15:19.114 00:15:19.114 job0: (groupid=0, jobs=1): err= 0: pid=79937: Fri Dec 6 04:18:29 2024 00:15:19.114 read: IOPS=733, BW=2935KiB/s (3006kB/s)(172MiB/60000msec) 00:15:19.114 slat (nsec): min=9551, max=89092, avg=13703.26, stdev=5083.60 00:15:19.114 clat (usec): min=147, max=2955, avg=221.48, stdev=40.20 00:15:19.114 lat (usec): min=158, max=2972, avg=235.18, stdev=40.96 00:15:19.114 clat percentiles (usec): 00:15:19.114 | 1.00th=[ 165], 5.00th=[ 176], 10.00th=[ 182], 20.00th=[ 192], 00:15:19.114 | 30.00th=[ 200], 
40.00th=[ 208], 50.00th=[ 217], 60.00th=[ 225], 00:15:19.114 | 70.00th=[ 235], 80.00th=[ 247], 90.00th=[ 269], 95.00th=[ 285], 00:15:19.114 | 99.00th=[ 322], 99.50th=[ 338], 99.90th=[ 506], 99.95th=[ 611], 00:15:19.114 | 99.99th=[ 857] 00:15:19.114 write: IOPS=736, BW=2944KiB/s (3015kB/s)(173MiB/60000msec); 0 zone resets 00:15:19.114 slat (usec): min=12, max=15452, avg=20.72, stdev=82.88 00:15:19.114 clat (usec): min=116, max=40773k, avg=1100.38, stdev=194025.20 00:15:19.114 lat (usec): min=132, max=40773k, avg=1121.10, stdev=194025.20 00:15:19.114 clat percentiles (usec): 00:15:19.114 | 1.00th=[ 129], 5.00th=[ 137], 10.00th=[ 143], 20.00th=[ 151], 00:15:19.114 | 30.00th=[ 157], 40.00th=[ 165], 50.00th=[ 172], 60.00th=[ 180], 00:15:19.114 | 70.00th=[ 188], 80.00th=[ 198], 90.00th=[ 217], 95.00th=[ 235], 00:15:19.114 | 99.00th=[ 269], 99.50th=[ 297], 99.90th=[ 545], 99.95th=[ 660], 00:15:19.114 | 99.99th=[ 1942] 00:15:19.114 bw ( KiB/s): min= 64, max=11896, per=100.00%, avg=8821.72, stdev=1887.46, samples=39 00:15:19.114 iops : min= 16, max= 2974, avg=2205.41, stdev=471.87, samples=39 00:15:19.114 lat (usec) : 250=89.58%, 500=10.29%, 750=0.10%, 1000=0.02% 00:15:19.114 lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%, >=2000=0.01% 00:15:19.114 cpu : usr=0.52%, sys=1.96%, ctx=88199, majf=0, minf=5 00:15:19.114 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:19.114 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:19.114 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:19.114 issued rwts: total=44032,44160,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:19.114 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:19.114 00:15:19.114 Run status group 0 (all jobs): 00:15:19.114 READ: bw=2935KiB/s (3006kB/s), 2935KiB/s-2935KiB/s (3006kB/s-3006kB/s), io=172MiB (180MB), run=60000-60000msec 00:15:19.114 WRITE: bw=2944KiB/s (3015kB/s), 2944KiB/s-2944KiB/s (3015kB/s-3015kB/s), io=173MiB (181MB), run=60000-60000msec 00:15:19.114 00:15:19.114 Disk stats (read/write): 00:15:19.114 nvme0n1: ios=43857/44032, merge=0/0, ticks=10030/8251, in_queue=18281, util=99.72% 00:15:19.114 04:18:29 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:19.114 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:19.114 04:18:29 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:19.114 04:18:29 -- common/autotest_common.sh@1208 -- # local i=0 00:15:19.114 04:18:29 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:15:19.114 04:18:29 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:19.114 04:18:29 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:15:19.114 04:18:29 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:19.114 nvmf hotplug test: fio successful as expected 00:15:19.114 04:18:29 -- common/autotest_common.sh@1220 -- # return 0 00:15:19.114 04:18:29 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:15:19.114 04:18:29 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:15:19.114 04:18:29 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:19.114 04:18:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.114 04:18:29 -- common/autotest_common.sh@10 -- # set +x 00:15:19.114 04:18:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.115 04:18:29 
-- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:15:19.115 04:18:29 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:15:19.115 04:18:29 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:15:19.115 04:18:29 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:19.115 04:18:29 -- nvmf/common.sh@116 -- # sync 00:15:19.115 04:18:29 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:19.115 04:18:29 -- nvmf/common.sh@119 -- # set +e 00:15:19.115 04:18:29 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:19.115 04:18:29 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:19.115 rmmod nvme_tcp 00:15:19.115 rmmod nvme_fabrics 00:15:19.115 rmmod nvme_keyring 00:15:19.115 04:18:29 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:19.115 04:18:29 -- nvmf/common.sh@123 -- # set -e 00:15:19.115 04:18:29 -- nvmf/common.sh@124 -- # return 0 00:15:19.115 04:18:29 -- nvmf/common.sh@477 -- # '[' -n 79852 ']' 00:15:19.115 04:18:29 -- nvmf/common.sh@478 -- # killprocess 79852 00:15:19.115 04:18:29 -- common/autotest_common.sh@936 -- # '[' -z 79852 ']' 00:15:19.115 04:18:29 -- common/autotest_common.sh@940 -- # kill -0 79852 00:15:19.115 04:18:29 -- common/autotest_common.sh@941 -- # uname 00:15:19.115 04:18:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:19.115 04:18:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79852 00:15:19.115 04:18:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:19.115 04:18:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:19.115 04:18:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79852' 00:15:19.115 killing process with pid 79852 00:15:19.115 04:18:29 -- common/autotest_common.sh@955 -- # kill 79852 00:15:19.115 04:18:29 -- common/autotest_common.sh@960 -- # wait 79852 00:15:19.115 04:18:29 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:19.115 04:18:29 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:19.115 04:18:29 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:19.115 04:18:29 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:19.115 04:18:29 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:19.115 04:18:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:19.115 04:18:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:19.115 04:18:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:19.115 04:18:29 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:19.115 00:15:19.115 real 1m4.619s 00:15:19.115 user 3m58.357s 00:15:19.115 sys 0m16.553s 00:15:19.115 04:18:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:19.115 04:18:29 -- common/autotest_common.sh@10 -- # set +x 00:15:19.115 ************************************ 00:15:19.115 END TEST nvmf_initiator_timeout 00:15:19.115 ************************************ 00:15:19.115 04:18:29 -- nvmf/nvmf.sh@69 -- # [[ virt == phy ]] 00:15:19.115 04:18:29 -- nvmf/nvmf.sh@86 -- # timing_exit target 00:15:19.115 04:18:29 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:19.115 04:18:29 -- common/autotest_common.sh@10 -- # set +x 00:15:19.115 04:18:29 -- nvmf/nvmf.sh@88 -- # timing_enter host 00:15:19.115 04:18:29 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:19.115 04:18:29 -- common/autotest_common.sh@10 -- # set +x 00:15:19.115 04:18:29 -- nvmf/nvmf.sh@90 -- # [[ 1 -eq 0 ]] 00:15:19.115 04:18:29 -- nvmf/nvmf.sh@97 -- # run_test 
nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:15:19.115 04:18:29 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:19.115 04:18:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:19.115 04:18:29 -- common/autotest_common.sh@10 -- # set +x 00:15:19.115 ************************************ 00:15:19.115 START TEST nvmf_identify 00:15:19.115 ************************************ 00:15:19.115 04:18:29 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:15:19.115 * Looking for test storage... 00:15:19.115 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:19.115 04:18:29 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:19.115 04:18:29 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:19.115 04:18:29 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:19.115 04:18:30 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:19.115 04:18:30 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:19.115 04:18:30 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:19.115 04:18:30 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:19.115 04:18:30 -- scripts/common.sh@335 -- # IFS=.-: 00:15:19.115 04:18:30 -- scripts/common.sh@335 -- # read -ra ver1 00:15:19.115 04:18:30 -- scripts/common.sh@336 -- # IFS=.-: 00:15:19.115 04:18:30 -- scripts/common.sh@336 -- # read -ra ver2 00:15:19.115 04:18:30 -- scripts/common.sh@337 -- # local 'op=<' 00:15:19.115 04:18:30 -- scripts/common.sh@339 -- # ver1_l=2 00:15:19.115 04:18:30 -- scripts/common.sh@340 -- # ver2_l=1 00:15:19.115 04:18:30 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:19.115 04:18:30 -- scripts/common.sh@343 -- # case "$op" in 00:15:19.115 04:18:30 -- scripts/common.sh@344 -- # : 1 00:15:19.115 04:18:30 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:19.115 04:18:30 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:19.115 04:18:30 -- scripts/common.sh@364 -- # decimal 1 00:15:19.115 04:18:30 -- scripts/common.sh@352 -- # local d=1 00:15:19.115 04:18:30 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:19.115 04:18:30 -- scripts/common.sh@354 -- # echo 1 00:15:19.115 04:18:30 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:19.115 04:18:30 -- scripts/common.sh@365 -- # decimal 2 00:15:19.115 04:18:30 -- scripts/common.sh@352 -- # local d=2 00:15:19.115 04:18:30 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:19.115 04:18:30 -- scripts/common.sh@354 -- # echo 2 00:15:19.115 04:18:30 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:19.115 04:18:30 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:19.115 04:18:30 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:19.115 04:18:30 -- scripts/common.sh@367 -- # return 0 00:15:19.115 04:18:30 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:19.115 04:18:30 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:19.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:19.115 --rc genhtml_branch_coverage=1 00:15:19.115 --rc genhtml_function_coverage=1 00:15:19.115 --rc genhtml_legend=1 00:15:19.115 --rc geninfo_all_blocks=1 00:15:19.115 --rc geninfo_unexecuted_blocks=1 00:15:19.115 00:15:19.115 ' 00:15:19.115 04:18:30 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:19.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:19.115 --rc genhtml_branch_coverage=1 00:15:19.115 --rc genhtml_function_coverage=1 00:15:19.115 --rc genhtml_legend=1 00:15:19.115 --rc geninfo_all_blocks=1 00:15:19.115 --rc geninfo_unexecuted_blocks=1 00:15:19.115 00:15:19.115 ' 00:15:19.115 04:18:30 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:19.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:19.115 --rc genhtml_branch_coverage=1 00:15:19.115 --rc genhtml_function_coverage=1 00:15:19.115 --rc genhtml_legend=1 00:15:19.115 --rc geninfo_all_blocks=1 00:15:19.115 --rc geninfo_unexecuted_blocks=1 00:15:19.115 00:15:19.115 ' 00:15:19.115 04:18:30 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:19.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:19.115 --rc genhtml_branch_coverage=1 00:15:19.115 --rc genhtml_function_coverage=1 00:15:19.115 --rc genhtml_legend=1 00:15:19.115 --rc geninfo_all_blocks=1 00:15:19.115 --rc geninfo_unexecuted_blocks=1 00:15:19.115 00:15:19.115 ' 00:15:19.115 04:18:30 -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:19.115 04:18:30 -- nvmf/common.sh@7 -- # uname -s 00:15:19.115 04:18:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:19.115 04:18:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:19.116 04:18:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:19.116 04:18:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:19.116 04:18:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:19.116 04:18:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:19.116 04:18:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:19.116 04:18:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:19.116 04:18:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:19.116 04:18:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:19.116 04:18:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cb4d3929-adbe-4142-b5d1-990bbf2d4fca 00:15:19.116 
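The xtrace above shows scripts/common.sh deciding which lcov option names to use by comparing the detected lcov version (1.15) against 2, field by field after splitting on dots. A minimal standalone sketch of that dotted-version comparison follows; the helper name version_lt is hypothetical and purely numeric version fields are assumed, so this is an illustration of the logic in the trace, not the project's own implementation.

#!/usr/bin/env bash
# Sketch of the cmp_versions "1.15" "<" "2" logic traced above:
# split both versions on '.', '-' and ':' and compare numerically field by
# field; a missing field counts as 0.
version_lt() {                 # hypothetical helper; returns 0 if $1 < $2
    local IFS=.-:
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    local i x y
    for (( i = 0; i < n; i++ )); do
        x=${a[i]:-0}; y=${b[i]:-0}
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1                   # equal versions are not "less than"
}

# Mirrors the trace: lcov 1.15 is below 2, so the legacy --rc lcov_* option
# names are selected for LCOV_OPTS.
if version_lt "1.15" "2"; then
    echo "lcov 1.15 < 2: using legacy --rc lcov_* option names"
fi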
04:18:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=cb4d3929-adbe-4142-b5d1-990bbf2d4fca 00:15:19.116 04:18:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:19.116 04:18:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:19.116 04:18:30 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:19.116 04:18:30 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:19.116 04:18:30 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:19.116 04:18:30 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:19.116 04:18:30 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:19.116 04:18:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:19.116 04:18:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:19.116 04:18:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:19.116 04:18:30 -- paths/export.sh@5 -- # export PATH 00:15:19.116 04:18:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:19.116 04:18:30 -- nvmf/common.sh@46 -- # : 0 00:15:19.116 04:18:30 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:19.116 04:18:30 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:19.116 04:18:30 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:19.116 04:18:30 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:19.116 04:18:30 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:19.116 04:18:30 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:15:19.116 04:18:30 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:19.116 04:18:30 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:19.116 04:18:30 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:19.116 04:18:30 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:19.116 04:18:30 -- host/identify.sh@14 -- # nvmftestinit 00:15:19.116 04:18:30 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:19.116 04:18:30 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:19.116 04:18:30 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:19.116 04:18:30 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:19.116 04:18:30 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:19.116 04:18:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:19.116 04:18:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:19.116 04:18:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:19.116 04:18:30 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:19.116 04:18:30 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:19.116 04:18:30 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:19.116 04:18:30 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:19.116 04:18:30 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:19.116 04:18:30 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:19.116 04:18:30 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:19.116 04:18:30 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:19.116 04:18:30 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:19.116 04:18:30 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:19.116 04:18:30 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:19.116 04:18:30 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:19.116 04:18:30 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:19.116 04:18:30 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:19.116 04:18:30 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:19.116 04:18:30 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:19.116 04:18:30 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:19.116 04:18:30 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:19.116 04:18:30 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:19.116 04:18:30 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:19.116 Cannot find device "nvmf_tgt_br" 00:15:19.116 04:18:30 -- nvmf/common.sh@154 -- # true 00:15:19.116 04:18:30 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:19.116 Cannot find device "nvmf_tgt_br2" 00:15:19.116 04:18:30 -- nvmf/common.sh@155 -- # true 00:15:19.116 04:18:30 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:19.116 04:18:30 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:19.116 Cannot find device "nvmf_tgt_br" 00:15:19.116 04:18:30 -- nvmf/common.sh@157 -- # true 00:15:19.116 04:18:30 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:19.116 Cannot find device "nvmf_tgt_br2" 00:15:19.116 04:18:30 -- nvmf/common.sh@158 -- # true 00:15:19.116 04:18:30 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:19.116 04:18:30 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:19.116 04:18:30 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:19.116 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:15:19.116 04:18:30 -- nvmf/common.sh@161 -- # true 00:15:19.116 04:18:30 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:19.116 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:19.116 04:18:30 -- nvmf/common.sh@162 -- # true 00:15:19.116 04:18:30 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:19.116 04:18:30 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:19.116 04:18:30 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:19.116 04:18:30 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:19.116 04:18:30 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:19.116 04:18:30 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:19.116 04:18:30 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:19.116 04:18:30 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:19.116 04:18:30 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:19.116 04:18:30 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:19.116 04:18:30 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:19.116 04:18:30 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:19.116 04:18:30 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:19.116 04:18:30 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:19.116 04:18:30 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:19.116 04:18:30 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:19.116 04:18:30 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:19.116 04:18:30 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:19.117 04:18:30 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:19.117 04:18:30 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:19.117 04:18:30 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:19.117 04:18:30 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:19.117 04:18:30 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:19.117 04:18:30 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:19.117 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:19.117 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:15:19.117 00:15:19.117 --- 10.0.0.2 ping statistics --- 00:15:19.117 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:19.117 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:15:19.117 04:18:30 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:19.117 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:19.117 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:15:19.117 00:15:19.117 --- 10.0.0.3 ping statistics --- 00:15:19.117 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:19.117 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:15:19.117 04:18:30 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:19.117 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:19.117 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:15:19.117 00:15:19.117 --- 10.0.0.1 ping statistics --- 00:15:19.117 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:19.117 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:15:19.117 04:18:30 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:19.117 04:18:30 -- nvmf/common.sh@421 -- # return 0 00:15:19.117 04:18:30 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:19.117 04:18:30 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:19.117 04:18:30 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:19.117 04:18:30 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:19.117 04:18:30 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:19.117 04:18:30 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:19.117 04:18:30 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:19.117 04:18:30 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:15:19.117 04:18:30 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:19.117 04:18:30 -- common/autotest_common.sh@10 -- # set +x 00:15:19.117 04:18:30 -- host/identify.sh@19 -- # nvmfpid=80784 00:15:19.117 04:18:30 -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:19.117 04:18:30 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:19.117 04:18:30 -- host/identify.sh@23 -- # waitforlisten 80784 00:15:19.117 04:18:30 -- common/autotest_common.sh@829 -- # '[' -z 80784 ']' 00:15:19.117 04:18:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:19.117 04:18:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:19.117 04:18:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:19.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:19.117 04:18:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:19.117 04:18:30 -- common/autotest_common.sh@10 -- # set +x 00:15:19.117 [2024-12-06 04:18:30.451656] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:19.117 [2024-12-06 04:18:30.451767] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:19.117 [2024-12-06 04:18:30.592423] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:19.117 [2024-12-06 04:18:30.684966] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:19.117 [2024-12-06 04:18:30.685160] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:19.117 [2024-12-06 04:18:30.685177] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:19.117 [2024-12-06 04:18:30.685188] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
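Condensed from the nvmf_veth_init trace above, before nvmf_tgt is launched inside the namespace: a standalone sketch (run as root) of the veth-plus-bridge topology the test builds, using the same interface, namespace, and address names that appear in the log. It is a recap of the traced commands for readability, not a replacement for nvmf/common.sh.

#!/usr/bin/env bash
# Initiator stays in the default netns (10.0.0.1); the SPDK target runs in
# nvmf_tgt_ns_spdk behind two veth pairs (10.0.0.2 and 10.0.0.3); the host-side
# peers are bridged together.
set -e

ip netns add nvmf_tgt_ns_spdk

# one veth pair for the initiator, two for the target namespace
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# addressing: initiator 10.0.0.1, target listeners 10.0.0.2 / 10.0.0.3
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# bridge the host-side veth ends so initiator and target share one L2 segment
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# allow NVMe/TCP (port 4420) in on the initiator side and forwarding across the bridge
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# connectivity checks, as in the trace
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1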
00:15:19.117 [2024-12-06 04:18:30.685343] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:19.117 [2024-12-06 04:18:30.685659] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:19.117 [2024-12-06 04:18:30.686059] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:19.117 [2024-12-06 04:18:30.686092] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:19.117 04:18:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:19.117 04:18:31 -- common/autotest_common.sh@862 -- # return 0 00:15:19.117 04:18:31 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:19.117 04:18:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.117 04:18:31 -- common/autotest_common.sh@10 -- # set +x 00:15:19.117 [2024-12-06 04:18:31.476115] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:19.117 04:18:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.117 04:18:31 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:15:19.117 04:18:31 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:19.117 04:18:31 -- common/autotest_common.sh@10 -- # set +x 00:15:19.117 04:18:31 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:19.117 04:18:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.117 04:18:31 -- common/autotest_common.sh@10 -- # set +x 00:15:19.117 Malloc0 00:15:19.117 04:18:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.117 04:18:31 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:19.117 04:18:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.117 04:18:31 -- common/autotest_common.sh@10 -- # set +x 00:15:19.117 04:18:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.117 04:18:31 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:15:19.117 04:18:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.117 04:18:31 -- common/autotest_common.sh@10 -- # set +x 00:15:19.117 04:18:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.117 04:18:31 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:19.117 04:18:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.117 04:18:31 -- common/autotest_common.sh@10 -- # set +x 00:15:19.117 [2024-12-06 04:18:31.585171] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:19.117 04:18:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.117 04:18:31 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:19.117 04:18:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.117 04:18:31 -- common/autotest_common.sh@10 -- # set +x 00:15:19.117 04:18:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.117 04:18:31 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:15:19.117 04:18:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.117 04:18:31 -- common/autotest_common.sh@10 -- # set +x 00:15:19.117 [2024-12-06 04:18:31.604927] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:15:19.117 [ 
00:15:19.117 { 00:15:19.117 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:19.117 "subtype": "Discovery", 00:15:19.117 "listen_addresses": [ 00:15:19.117 { 00:15:19.117 "transport": "TCP", 00:15:19.117 "trtype": "TCP", 00:15:19.117 "adrfam": "IPv4", 00:15:19.117 "traddr": "10.0.0.2", 00:15:19.117 "trsvcid": "4420" 00:15:19.117 } 00:15:19.117 ], 00:15:19.117 "allow_any_host": true, 00:15:19.117 "hosts": [] 00:15:19.117 }, 00:15:19.117 { 00:15:19.117 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:19.117 "subtype": "NVMe", 00:15:19.117 "listen_addresses": [ 00:15:19.117 { 00:15:19.117 "transport": "TCP", 00:15:19.117 "trtype": "TCP", 00:15:19.117 "adrfam": "IPv4", 00:15:19.117 "traddr": "10.0.0.2", 00:15:19.117 "trsvcid": "4420" 00:15:19.117 } 00:15:19.117 ], 00:15:19.117 "allow_any_host": true, 00:15:19.117 "hosts": [], 00:15:19.118 "serial_number": "SPDK00000000000001", 00:15:19.118 "model_number": "SPDK bdev Controller", 00:15:19.118 "max_namespaces": 32, 00:15:19.118 "min_cntlid": 1, 00:15:19.118 "max_cntlid": 65519, 00:15:19.118 "namespaces": [ 00:15:19.118 { 00:15:19.118 "nsid": 1, 00:15:19.118 "bdev_name": "Malloc0", 00:15:19.118 "name": "Malloc0", 00:15:19.118 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:15:19.118 "eui64": "ABCDEF0123456789", 00:15:19.118 "uuid": "499ec7df-b737-4bd8-8eba-13ed63326b54" 00:15:19.118 } 00:15:19.118 ] 00:15:19.118 } 00:15:19.118 ] 00:15:19.118 04:18:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.118 04:18:31 -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:15:19.118 [2024-12-06 04:18:31.642904] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:15:19.118 [2024-12-06 04:18:31.642956] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80823 ] 00:15:19.379 [2024-12-06 04:18:31.782822] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:15:19.379 [2024-12-06 04:18:31.782887] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:15:19.379 [2024-12-06 04:18:31.782894] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:15:19.379 [2024-12-06 04:18:31.782906] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:15:19.379 [2024-12-06 04:18:31.782920] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl uring 00:15:19.379 [2024-12-06 04:18:31.783055] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:15:19.379 [2024-12-06 04:18:31.783111] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x11a7510 0 00:15:19.379 [2024-12-06 04:18:31.795460] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:15:19.379 [2024-12-06 04:18:31.795494] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:15:19.379 [2024-12-06 04:18:31.795500] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:15:19.379 [2024-12-06 04:18:31.795504] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:15:19.379 [2024-12-06 04:18:31.795552] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:19.379 [2024-12-06 04:18:31.795559] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:19.379 [2024-12-06 04:18:31.795563] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11a7510) 00:15:19.379 [2024-12-06 04:18:31.795578] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:15:19.379 [2024-12-06 04:18:31.795608] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f38a0, cid 0, qid 0 00:15:19.379 [2024-12-06 04:18:31.803433] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:19.379 [2024-12-06 04:18:31.803457] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:19.379 [2024-12-06 04:18:31.803462] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:19.379 [2024-12-06 04:18:31.803483] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11f38a0) on tqpair=0x11a7510 00:15:19.379 [2024-12-06 04:18:31.803496] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:15:19.379 [2024-12-06 04:18:31.803504] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:15:19.379 [2024-12-06 04:18:31.803510] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:15:19.379 [2024-12-06 04:18:31.803528] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:19.379 [2024-12-06 04:18:31.803534] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:19.379 [2024-12-06 
04:18:31.803538] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11a7510) 00:15:19.379 [2024-12-06 04:18:31.803548] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.379 [2024-12-06 04:18:31.803578] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f38a0, cid 0, qid 0 00:15:19.379 [2024-12-06 04:18:31.803646] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:19.379 [2024-12-06 04:18:31.803653] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:19.379 [2024-12-06 04:18:31.803657] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:19.379 [2024-12-06 04:18:31.803661] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11f38a0) on tqpair=0x11a7510 00:15:19.379 [2024-12-06 04:18:31.803668] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:15:19.379 [2024-12-06 04:18:31.803676] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:15:19.379 [2024-12-06 04:18:31.803684] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:19.379 [2024-12-06 04:18:31.803688] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:19.379 [2024-12-06 04:18:31.803692] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11a7510) 00:15:19.379 [2024-12-06 04:18:31.803700] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.379 [2024-12-06 04:18:31.803719] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f38a0, cid 0, qid 0 00:15:19.379 [2024-12-06 04:18:31.803768] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:19.379 [2024-12-06 04:18:31.803775] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:19.379 [2024-12-06 04:18:31.803779] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:19.379 [2024-12-06 04:18:31.803783] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11f38a0) on tqpair=0x11a7510 00:15:19.379 [2024-12-06 04:18:31.803790] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:15:19.379 [2024-12-06 04:18:31.803799] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:15:19.379 [2024-12-06 04:18:31.803807] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:19.379 [2024-12-06 04:18:31.803811] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:19.379 [2024-12-06 04:18:31.803815] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11a7510) 00:15:19.379 [2024-12-06 04:18:31.803822] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.379 [2024-12-06 04:18:31.803840] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f38a0, cid 0, qid 0 00:15:19.379 [2024-12-06 04:18:31.803887] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:19.379 [2024-12-06 04:18:31.803893] 
nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:19.379 [2024-12-06 04:18:31.803897] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:19.379 [2024-12-06 04:18:31.803901] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11f38a0) on tqpair=0x11a7510 00:15:19.379 [2024-12-06 04:18:31.803908] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:19.379 [2024-12-06 04:18:31.803919] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:19.379 [2024-12-06 04:18:31.803923] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:19.379 [2024-12-06 04:18:31.803927] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11a7510) 00:15:19.379 [2024-12-06 04:18:31.803935] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.379 [2024-12-06 04:18:31.803951] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f38a0, cid 0, qid 0 00:15:19.379 [2024-12-06 04:18:31.804001] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:19.379 [2024-12-06 04:18:31.804008] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:19.379 [2024-12-06 04:18:31.804012] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:19.379 [2024-12-06 04:18:31.804016] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11f38a0) on tqpair=0x11a7510 00:15:19.379 [2024-12-06 04:18:31.804022] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:15:19.379 [2024-12-06 04:18:31.804027] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:15:19.379 [2024-12-06 04:18:31.804035] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:19.379 [2024-12-06 04:18:31.804141] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:15:19.379 [2024-12-06 04:18:31.804146] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:19.379 [2024-12-06 04:18:31.804156] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:19.379 [2024-12-06 04:18:31.804160] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:19.379 [2024-12-06 04:18:31.804164] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11a7510) 00:15:19.379 [2024-12-06 04:18:31.804172] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.379 [2024-12-06 04:18:31.804189] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f38a0, cid 0, qid 0 00:15:19.379 [2024-12-06 04:18:31.804236] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:19.379 [2024-12-06 04:18:31.804243] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:19.379 [2024-12-06 04:18:31.804247] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:15:19.379 [2024-12-06 04:18:31.804251] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11f38a0) on tqpair=0x11a7510 00:15:19.379 [2024-12-06 04:18:31.804257] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:19.379 [2024-12-06 04:18:31.804267] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:19.380 [2024-12-06 04:18:31.804272] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:19.380 [2024-12-06 04:18:31.804276] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11a7510) 00:15:19.380 [2024-12-06 04:18:31.804283] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.380 [2024-12-06 04:18:31.804300] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f38a0, cid 0, qid 0 00:15:19.380 [2024-12-06 04:18:31.804349] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:19.380 [2024-12-06 04:18:31.804355] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:19.380 [2024-12-06 04:18:31.804359] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:19.380 [2024-12-06 04:18:31.804363] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11f38a0) on tqpair=0x11a7510 00:15:19.380 [2024-12-06 04:18:31.804369] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:19.380 [2024-12-06 04:18:31.804375] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:15:19.380 [2024-12-06 04:18:31.804383] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:15:19.380 [2024-12-06 04:18:31.804398] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:15:19.380 [2024-12-06 04:18:31.804423] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:19.380 [2024-12-06 04:18:31.804427] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:19.380 [2024-12-06 04:18:31.804431] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11a7510) 00:15:19.380 [2024-12-06 04:18:31.804440] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.380 [2024-12-06 04:18:31.804461] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f38a0, cid 0, qid 0 00:15:19.380 [2024-12-06 04:18:31.804550] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:19.380 [2024-12-06 04:18:31.804557] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:19.380 [2024-12-06 04:18:31.804561] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:19.380 [2024-12-06 04:18:31.804565] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11a7510): datao=0, datal=4096, cccid=0 00:15:19.380 [2024-12-06 04:18:31.804570] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x11f38a0) on tqpair(0x11a7510): expected_datao=0, 
payload_size=4096 00:15:19.380 [2024-12-06 04:18:31.804580] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:19.380 [2024-12-06 04:18:31.804585] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:19.380 [2024-12-06 04:18:31.804593] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:19.380 [2024-12-06 04:18:31.804600] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:19.380 [2024-12-06 04:18:31.804604] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:19.380 [2024-12-06 04:18:31.804608] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11f38a0) on tqpair=0x11a7510 00:15:19.380 [2024-12-06 04:18:31.804617] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:15:19.380 [2024-12-06 04:18:31.804623] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:15:19.380 [2024-12-06 04:18:31.804628] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:15:19.380 [2024-12-06 04:18:31.804634] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:15:19.380 [2024-12-06 04:18:31.804639] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:15:19.380 [2024-12-06 04:18:31.804644] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:15:19.380 [2024-12-06 04:18:31.804658] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:15:19.380 [2024-12-06 04:18:31.804666] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:19.380 [2024-12-06 04:18:31.804670] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:19.380 [2024-12-06 04:18:31.804674] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11a7510) 00:15:19.380 [2024-12-06 04:18:31.804683] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:19.380 [2024-12-06 04:18:31.804702] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f38a0, cid 0, qid 0 00:15:19.380 [2024-12-06 04:18:31.804758] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:19.380 [2024-12-06 04:18:31.804765] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:19.380 [2024-12-06 04:18:31.804768] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:19.380 [2024-12-06 04:18:31.804773] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11f38a0) on tqpair=0x11a7510 00:15:19.380 [2024-12-06 04:18:31.804782] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:19.380 [2024-12-06 04:18:31.804786] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:19.380 [2024-12-06 04:18:31.804790] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11a7510) 00:15:19.380 [2024-12-06 04:18:31.804797] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:19.380 [2024-12-06 
04:18:31.804804] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:19.380 [2024-12-06 04:18:31.804808] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:19.380 [2024-12-06 04:18:31.804812] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x11a7510) 00:15:19.380 [2024-12-06 04:18:31.804818] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:19.380 [2024-12-06 04:18:31.804824] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:19.380 [2024-12-06 04:18:31.804828] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:19.380 [2024-12-06 04:18:31.804832] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x11a7510) 00:15:19.380 [2024-12-06 04:18:31.804839] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:19.380 [2024-12-06 04:18:31.804845] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:19.380 [2024-12-06 04:18:31.804849] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:19.380 [2024-12-06 04:18:31.804853] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11a7510) 00:15:19.380 [2024-12-06 04:18:31.804859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:19.380 [2024-12-06 04:18:31.804864] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:15:19.380 [2024-12-06 04:18:31.804878] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:19.380 [2024-12-06 04:18:31.804886] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:19.380 [2024-12-06 04:18:31.804890] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:19.380 [2024-12-06 04:18:31.804894] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11a7510) 00:15:19.380 [2024-12-06 04:18:31.804901] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.380 [2024-12-06 04:18:31.804921] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f38a0, cid 0, qid 0 00:15:19.380 [2024-12-06 04:18:31.804928] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f3a00, cid 1, qid 0 00:15:19.380 [2024-12-06 04:18:31.804933] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f3b60, cid 2, qid 0 00:15:19.380 [2024-12-06 04:18:31.804938] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f3cc0, cid 3, qid 0 00:15:19.380 [2024-12-06 04:18:31.804943] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f3e20, cid 4, qid 0 00:15:19.380 [2024-12-06 04:18:31.805029] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:19.380 [2024-12-06 04:18:31.805036] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:19.380 [2024-12-06 04:18:31.805040] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:19.380 [2024-12-06 04:18:31.805044] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: 
*DEBUG*: complete tcp_req(0x11f3e20) on tqpair=0x11a7510 00:15:19.380 [2024-12-06 04:18:31.805051] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:15:19.380 [2024-12-06 04:18:31.805057] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:15:19.380 [2024-12-06 04:18:31.805068] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:19.380 [2024-12-06 04:18:31.805073] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:19.380 [2024-12-06 04:18:31.805076] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11a7510) 00:15:19.380 [2024-12-06 04:18:31.805084] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.380 [2024-12-06 04:18:31.805103] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f3e20, cid 4, qid 0 00:15:19.380 [2024-12-06 04:18:31.805169] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:19.380 [2024-12-06 04:18:31.805177] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:19.380 [2024-12-06 04:18:31.805181] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:19.380 [2024-12-06 04:18:31.805185] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11a7510): datao=0, datal=4096, cccid=4 00:15:19.380 [2024-12-06 04:18:31.805190] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x11f3e20) on tqpair(0x11a7510): expected_datao=0, payload_size=4096 00:15:19.380 [2024-12-06 04:18:31.805198] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:19.380 [2024-12-06 04:18:31.805202] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:19.380 [2024-12-06 04:18:31.805210] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:19.380 [2024-12-06 04:18:31.805217] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:19.380 [2024-12-06 04:18:31.805220] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:19.380 [2024-12-06 04:18:31.805224] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11f3e20) on tqpair=0x11a7510 00:15:19.380 [2024-12-06 04:18:31.805238] nvme_ctrlr.c:4024:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:15:19.380 [2024-12-06 04:18:31.805264] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:19.380 [2024-12-06 04:18:31.805270] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:19.380 [2024-12-06 04:18:31.805274] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11a7510) 00:15:19.380 [2024-12-06 04:18:31.805282] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.380 [2024-12-06 04:18:31.805290] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:19.381 [2024-12-06 04:18:31.805294] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:19.381 [2024-12-06 04:18:31.805298] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x11a7510) 00:15:19.381 [2024-12-06 04:18:31.805304] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:15:19.381 [2024-12-06 04:18:31.805329] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f3e20, cid 4, qid 0 00:15:19.381 [2024-12-06 04:18:31.805336] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f3f80, cid 5, qid 0 00:15:19.381 [2024-12-06 04:18:31.805470] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:19.381 [2024-12-06 04:18:31.805479] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:19.381 [2024-12-06 04:18:31.805483] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:19.381 [2024-12-06 04:18:31.805487] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11a7510): datao=0, datal=1024, cccid=4 00:15:19.381 [2024-12-06 04:18:31.805492] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x11f3e20) on tqpair(0x11a7510): expected_datao=0, payload_size=1024 00:15:19.381 [2024-12-06 04:18:31.805499] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:19.381 [2024-12-06 04:18:31.805503] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:19.381 [2024-12-06 04:18:31.805510] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:19.381 [2024-12-06 04:18:31.805516] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:19.381 [2024-12-06 04:18:31.805519] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:19.381 [2024-12-06 04:18:31.805524] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11f3f80) on tqpair=0x11a7510 00:15:19.381 [2024-12-06 04:18:31.805543] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:19.381 [2024-12-06 04:18:31.805551] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:19.381 [2024-12-06 04:18:31.805555] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:19.381 [2024-12-06 04:18:31.805559] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11f3e20) on tqpair=0x11a7510 00:15:19.381 [2024-12-06 04:18:31.805571] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:19.381 [2024-12-06 04:18:31.805575] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:19.381 [2024-12-06 04:18:31.805579] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11a7510) 00:15:19.381 [2024-12-06 04:18:31.805587] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.381 [2024-12-06 04:18:31.805611] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f3e20, cid 4, qid 0 00:15:19.381 [2024-12-06 04:18:31.805681] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:19.381 [2024-12-06 04:18:31.805688] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:19.381 [2024-12-06 04:18:31.805692] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:19.381 [2024-12-06 04:18:31.805696] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11a7510): datao=0, datal=3072, cccid=4 00:15:19.381 [2024-12-06 04:18:31.805701] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x11f3e20) on tqpair(0x11a7510): expected_datao=0, payload_size=3072 00:15:19.381 [2024-12-06 
04:18:31.805709] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:19.381 [2024-12-06 04:18:31.805713] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:19.381 [2024-12-06 04:18:31.805721] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:19.381 [2024-12-06 04:18:31.805727] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:19.381 [2024-12-06 04:18:31.805731] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:19.381 [2024-12-06 04:18:31.805736] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11f3e20) on tqpair=0x11a7510 00:15:19.381 [2024-12-06 04:18:31.805746] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:19.381 [2024-12-06 04:18:31.805751] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:19.381 [2024-12-06 04:18:31.805755] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11a7510) 00:15:19.381 [2024-12-06 04:18:31.805762] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.381 [2024-12-06 04:18:31.805784] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f3e20, cid 4, qid 0 00:15:19.381 [2024-12-06 04:18:31.805851] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:19.381 [2024-12-06 04:18:31.805858] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:19.381 [2024-12-06 04:18:31.805862] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:19.381 [2024-12-06 04:18:31.805866] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11a7510): datao=0, datal=8, cccid=4 00:15:19.381 [2024-12-06 04:18:31.805871] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x11f3e20) on tqpair(0x11a7510): expected_datao=0, payload_size=8 00:15:19.381 [2024-12-06 04:18:31.805878] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:19.381 [2024-12-06 04:18:31.805882] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:19.381 ===================================================== 00:15:19.381 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:15:19.381 ===================================================== 00:15:19.381 Controller Capabilities/Features 00:15:19.381 ================================ 00:15:19.381 Vendor ID: 0000 00:15:19.381 Subsystem Vendor ID: 0000 00:15:19.381 Serial Number: .................... 00:15:19.381 Model Number: ........................................ 
00:15:19.381 Firmware Version: 24.01.1 00:15:19.381 Recommended Arb Burst: 0 00:15:19.381 IEEE OUI Identifier: 00 00 00 00:15:19.381 Multi-path I/O 00:15:19.381 May have multiple subsystem ports: No 00:15:19.381 May have multiple controllers: No 00:15:19.381 Associated with SR-IOV VF: No 00:15:19.381 Max Data Transfer Size: 131072 00:15:19.381 Max Number of Namespaces: 0 00:15:19.381 Max Number of I/O Queues: 1024 00:15:19.381 NVMe Specification Version (VS): 1.3 00:15:19.381 NVMe Specification Version (Identify): 1.3 00:15:19.381 Maximum Queue Entries: 128 00:15:19.381 Contiguous Queues Required: Yes 00:15:19.381 Arbitration Mechanisms Supported 00:15:19.381 Weighted Round Robin: Not Supported 00:15:19.381 Vendor Specific: Not Supported 00:15:19.381 Reset Timeout: 15000 ms 00:15:19.381 Doorbell Stride: 4 bytes 00:15:19.381 NVM Subsystem Reset: Not Supported 00:15:19.381 Command Sets Supported 00:15:19.381 NVM Command Set: Supported 00:15:19.381 Boot Partition: Not Supported 00:15:19.381 Memory Page Size Minimum: 4096 bytes 00:15:19.381 Memory Page Size Maximum: 4096 bytes 00:15:19.381 Persistent Memory Region: Not Supported 00:15:19.381 Optional Asynchronous Events Supported 00:15:19.381 Namespace Attribute Notices: Not Supported 00:15:19.381 Firmware Activation Notices: Not Supported 00:15:19.381 ANA Change Notices: Not Supported 00:15:19.381 PLE Aggregate Log Change Notices: Not Supported 00:15:19.381 LBA Status Info Alert Notices: Not Supported 00:15:19.381 EGE Aggregate Log Change Notices: Not Supported 00:15:19.381 Normal NVM Subsystem Shutdown event: Not Supported 00:15:19.381 Zone Descriptor Change Notices: Not Supported 00:15:19.381 Discovery Log Change Notices: Supported 00:15:19.381 Controller Attributes 00:15:19.381 128-bit Host Identifier: Not Supported 00:15:19.381 Non-Operational Permissive Mode: Not Supported 00:15:19.381 NVM Sets: Not Supported 00:15:19.381 Read Recovery Levels: Not Supported 00:15:19.381 Endurance Groups: Not Supported 00:15:19.381 Predictable Latency Mode: Not Supported 00:15:19.381 Traffic Based Keep ALive: Not Supported 00:15:19.381 Namespace Granularity: Not Supported 00:15:19.381 SQ Associations: Not Supported 00:15:19.381 UUID List: Not Supported 00:15:19.381 Multi-Domain Subsystem: Not Supported 00:15:19.381 Fixed Capacity Management: Not Supported 00:15:19.381 Variable Capacity Management: Not Supported 00:15:19.381 Delete Endurance Group: Not Supported 00:15:19.381 Delete NVM Set: Not Supported 00:15:19.381 Extended LBA Formats Supported: Not Supported 00:15:19.381 Flexible Data Placement Supported: Not Supported 00:15:19.381 00:15:19.381 Controller Memory Buffer Support 00:15:19.381 ================================ 00:15:19.381 Supported: No 00:15:19.381 00:15:19.381 Persistent Memory Region Support 00:15:19.381 ================================ 00:15:19.381 Supported: No 00:15:19.381 00:15:19.381 Admin Command Set Attributes 00:15:19.381 ============================ 00:15:19.381 Security Send/Receive: Not Supported 00:15:19.381 Format NVM: Not Supported 00:15:19.381 Firmware Activate/Download: Not Supported 00:15:19.381 Namespace Management: Not Supported 00:15:19.381 Device Self-Test: Not Supported 00:15:19.381 Directives: Not Supported 00:15:19.381 NVMe-MI: Not Supported 00:15:19.381 Virtualization Management: Not Supported 00:15:19.381 Doorbell Buffer Config: Not Supported 00:15:19.381 Get LBA Status Capability: Not Supported 00:15:19.381 Command & Feature Lockdown Capability: Not Supported 00:15:19.381 Abort Command Limit: 1 00:15:19.381 
Async Event Request Limit: 4 00:15:19.381 Number of Firmware Slots: N/A 00:15:19.381 Firmware Slot 1 Read-Only: N/A 00:15:19.381 Firmware Activation Without Reset: N/A 00:15:19.381 Multiple Update Detection Support: N/A 00:15:19.381 Firmware Update Granularity: No Information Provided 00:15:19.381 Per-Namespace SMART Log: No 00:15:19.381 Asymmetric Namespace Access Log Page: Not Supported 00:15:19.381 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:15:19.381 Command Effects Log Page: Not Supported 00:15:19.381 Get Log Page Extended Data: Supported 00:15:19.381 Telemetry Log Pages: Not Supported 00:15:19.381 Persistent Event Log Pages: Not Supported 00:15:19.381 Supported Log Pages Log Page: May Support 00:15:19.381 Commands Supported & Effects Log Page: Not Supported 00:15:19.381 Feature Identifiers & Effects Log Page:May Support 00:15:19.381 NVMe-MI Commands & Effects Log Page: May Support 00:15:19.382 Data Area 4 for Telemetry Log: Not Supported 00:15:19.382 Error Log Page Entries Supported: 128 00:15:19.382 Keep Alive: Not Supported 00:15:19.382 00:15:19.382 NVM Command Set Attributes 00:15:19.382 ========================== 00:15:19.382 Submission Queue Entry Size 00:15:19.382 Max: 1 00:15:19.382 Min: 1 00:15:19.382 Completion Queue Entry Size 00:15:19.382 Max: 1 00:15:19.382 Min: 1 00:15:19.382 Number of Namespaces: 0 00:15:19.382 Compare Command: Not Supported 00:15:19.382 Write Uncorrectable Command: Not Supported 00:15:19.382 Dataset Management Command: Not Supported 00:15:19.382 Write Zeroes Command: Not Supported 00:15:19.382 Set Features Save Field: Not Supported 00:15:19.382 Reservations: Not Supported 00:15:19.382 Timestamp: Not Supported 00:15:19.382 Copy: Not Supported 00:15:19.382 Volatile Write Cache: Not Present 00:15:19.382 Atomic Write Unit (Normal): 1 00:15:19.382 Atomic Write Unit (PFail): 1 00:15:19.382 Atomic Compare & Write Unit: 1 00:15:19.382 Fused Compare & Write: Supported 00:15:19.382 Scatter-Gather List 00:15:19.382 SGL Command Set: Supported 00:15:19.382 SGL Keyed: Supported 00:15:19.382 SGL Bit Bucket Descriptor: Not Supported 00:15:19.382 SGL Metadata Pointer: Not Supported 00:15:19.382 Oversized SGL: Not Supported 00:15:19.382 SGL Metadata Address: Not Supported 00:15:19.382 SGL Offset: Supported 00:15:19.382 Transport SGL Data Block: Not Supported 00:15:19.382 Replay Protected Memory Block: Not Supported 00:15:19.382 00:15:19.382 Firmware Slot Information 00:15:19.382 ========================= 00:15:19.382 Active slot: 0 00:15:19.382 00:15:19.382 00:15:19.382 Error Log 00:15:19.382 ========= 00:15:19.382 00:15:19.382 Active Namespaces 00:15:19.382 ================= 00:15:19.382 Discovery Log Page 00:15:19.382 ================== 00:15:19.382 Generation Counter: 2 00:15:19.382 Number of Records: 2 00:15:19.382 Record Format: 0 00:15:19.382 00:15:19.382 Discovery Log Entry 0 00:15:19.382 ---------------------- 00:15:19.382 Transport Type: 3 (TCP) 00:15:19.382 Address Family: 1 (IPv4) 00:15:19.382 Subsystem Type: 3 (Current Discovery Subsystem) 00:15:19.382 Entry Flags: 00:15:19.382 Duplicate Returned Information: 1 00:15:19.382 Explicit Persistent Connection Support for Discovery: 1 00:15:19.382 Transport Requirements: 00:15:19.382 Secure Channel: Not Required 00:15:19.382 Port ID: 0 (0x0000) 00:15:19.382 Controller ID: 65535 (0xffff) 00:15:19.382 Admin Max SQ Size: 128 00:15:19.382 Transport Service Identifier: 4420 00:15:19.382 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:15:19.382 Transport Address: 10.0.0.2 00:15:19.382 
Discovery Log Entry 1 00:15:19.382 ---------------------- 00:15:19.382 Transport Type: 3 (TCP) 00:15:19.382 Address Family: 1 (IPv4) 00:15:19.382 Subsystem Type: 2 (NVM Subsystem) 00:15:19.382 Entry Flags: 00:15:19.382 Duplicate Returned Information: 0 00:15:19.382 Explicit Persistent Connection Support for Discovery: 0 00:15:19.382 Transport Requirements: 00:15:19.382 Secure Channel: Not Required 00:15:19.382 Port ID: 0 (0x0000) 00:15:19.382 Controller ID: 65535 (0xffff) 00:15:19.382 Admin Max SQ Size: 128 00:15:19.382 Transport Service Identifier: 4420 00:15:19.382 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:15:19.382 Transport Address: 10.0.0.2 [2024-12-06 04:18:31.805897] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:19.382 [2024-12-06 04:18:31.805904] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:19.382 [2024-12-06 04:18:31.805908] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:19.382 [2024-12-06 04:18:31.805912] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11f3e20) on tqpair=0x11a7510 00:15:19.382 [2024-12-06 04:18:31.806008] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:15:19.382 [2024-12-06 04:18:31.806023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.382 [2024-12-06 04:18:31.806030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.382 [2024-12-06 04:18:31.806037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.382 [2024-12-06 04:18:31.806043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.382 [2024-12-06 04:18:31.806053] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:19.382 [2024-12-06 04:18:31.806057] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:19.382 [2024-12-06 04:18:31.806061] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11a7510) 00:15:19.382 [2024-12-06 04:18:31.806070] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.382 [2024-12-06 04:18:31.806092] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f3cc0, cid 3, qid 0 00:15:19.382 [2024-12-06 04:18:31.806147] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:19.382 [2024-12-06 04:18:31.806154] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:19.382 [2024-12-06 04:18:31.806158] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:19.382 [2024-12-06 04:18:31.806162] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11f3cc0) on tqpair=0x11a7510 00:15:19.382 [2024-12-06 04:18:31.806171] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:19.382 [2024-12-06 04:18:31.806175] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:19.382 [2024-12-06 04:18:31.806179] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11a7510) 00:15:19.382 [2024-12-06 04:18:31.806187] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY 
SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.382 [2024-12-06 04:18:31.806208] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f3cc0, cid 3, qid 0 00:15:19.382 [2024-12-06 04:18:31.806276] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:19.382 [2024-12-06 04:18:31.806283] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:19.382 [2024-12-06 04:18:31.806287] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:19.382 [2024-12-06 04:18:31.806291] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11f3cc0) on tqpair=0x11a7510 00:15:19.382 [2024-12-06 04:18:31.806297] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:15:19.382 [2024-12-06 04:18:31.806303] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:15:19.382 [2024-12-06 04:18:31.806313] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:19.382 [2024-12-06 04:18:31.806317] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:19.382 [2024-12-06 04:18:31.806321] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11a7510) 00:15:19.382 [2024-12-06 04:18:31.806329] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.382 [2024-12-06 04:18:31.806345] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f3cc0, cid 3, qid 0 00:15:19.382 [2024-12-06 04:18:31.806425] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:19.382 [2024-12-06 04:18:31.806434] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:19.382 [2024-12-06 04:18:31.806438] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:19.382 [2024-12-06 04:18:31.806442] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11f3cc0) on tqpair=0x11a7510 00:15:19.382 [2024-12-06 04:18:31.806455] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:19.382 [2024-12-06 04:18:31.806460] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:19.382 [2024-12-06 04:18:31.806464] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11a7510) 00:15:19.382 [2024-12-06 04:18:31.806472] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.382 [2024-12-06 04:18:31.806492] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f3cc0, cid 3, qid 0 00:15:19.382 [2024-12-06 04:18:31.806539] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:19.382 [2024-12-06 04:18:31.806546] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:19.382 [2024-12-06 04:18:31.806550] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:19.382 [2024-12-06 04:18:31.806554] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11f3cc0) on tqpair=0x11a7510 00:15:19.382 [2024-12-06 04:18:31.806565] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:19.382 [2024-12-06 04:18:31.806570] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:19.382 [2024-12-06 04:18:31.806574] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on 
tqpair(0x11a7510) 00:15:19.382 [2024-12-06 04:18:31.806581] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.382 [2024-12-06 04:18:31.806598] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f3cc0, cid 3, qid 0 00:15:19.382 [2024-12-06 04:18:31.806642] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:19.382 [2024-12-06 04:18:31.806649] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:19.382 [2024-12-06 04:18:31.806653] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:19.382 [2024-12-06 04:18:31.806657] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11f3cc0) on tqpair=0x11a7510 00:15:19.382 [2024-12-06 04:18:31.806668] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:19.382 [2024-12-06 04:18:31.806673] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:19.382 [2024-12-06 04:18:31.806677] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11a7510) 00:15:19.382 [2024-12-06 04:18:31.806684] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.382 [2024-12-06 04:18:31.806700] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f3cc0, cid 3, qid 0 00:15:19.383 [2024-12-06 04:18:31.806753] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:19.383 [2024-12-06 04:18:31.806760] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:19.383 [2024-12-06 04:18:31.806764] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:19.383 [2024-12-06 04:18:31.806768] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11f3cc0) on tqpair=0x11a7510 00:15:19.383 [2024-12-06 04:18:31.806780] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:19.383 [2024-12-06 04:18:31.806784] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:19.383 [2024-12-06 04:18:31.806788] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11a7510) 00:15:19.383 [2024-12-06 04:18:31.806795] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.383 [2024-12-06 04:18:31.806811] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f3cc0, cid 3, qid 0 00:15:19.383 [2024-12-06 04:18:31.806861] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:19.383 [2024-12-06 04:18:31.806868] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:19.383 [2024-12-06 04:18:31.806872] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:19.383 [2024-12-06 04:18:31.806876] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11f3cc0) on tqpair=0x11a7510 00:15:19.383 [2024-12-06 04:18:31.806887] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:19.383 [2024-12-06 04:18:31.806892] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:19.383 [2024-12-06 04:18:31.806896] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11a7510) 00:15:19.383 [2024-12-06 04:18:31.806903] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:15:19.383 [2024-12-06 04:18:31.806919] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f3cc0, cid 3, qid 0 00:15:19.383 [2024-12-06 04:18:31.806966] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:19.383 [2024-12-06 04:18:31.806973] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:19.383 [2024-12-06 04:18:31.806976] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:19.383 [2024-12-06 04:18:31.806980] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11f3cc0) on tqpair=0x11a7510 00:15:19.383 [2024-12-06 04:18:31.806992] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:19.383 [2024-12-06 04:18:31.806996] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:19.383 [2024-12-06 04:18:31.807000] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11a7510) 00:15:19.383 [2024-12-06 04:18:31.807007] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.383 [2024-12-06 04:18:31.807024] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f3cc0, cid 3, qid 0 00:15:19.383 [2024-12-06 04:18:31.807077] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:19.383 [2024-12-06 04:18:31.807084] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:19.383 [2024-12-06 04:18:31.807088] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:19.383 [2024-12-06 04:18:31.807092] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11f3cc0) on tqpair=0x11a7510 00:15:19.383 [2024-12-06 04:18:31.807103] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:19.383 [2024-12-06 04:18:31.807108] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:19.383 [2024-12-06 04:18:31.807112] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11a7510) 00:15:19.383 [2024-12-06 04:18:31.807119] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.383 [2024-12-06 04:18:31.807135] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f3cc0, cid 3, qid 0 00:15:19.383 [2024-12-06 04:18:31.807183] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:19.383 [2024-12-06 04:18:31.807189] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:19.383 [2024-12-06 04:18:31.807193] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:19.383 [2024-12-06 04:18:31.807197] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11f3cc0) on tqpair=0x11a7510 00:15:19.383 [2024-12-06 04:18:31.807208] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:19.383 [2024-12-06 04:18:31.807213] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:19.383 [2024-12-06 04:18:31.807217] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11a7510) 00:15:19.383 [2024-12-06 04:18:31.807224] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.383 [2024-12-06 04:18:31.807240] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f3cc0, cid 3, qid 0 00:15:19.383 [2024-12-06 04:18:31.807287] 
nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:19.383 [2024-12-06 04:18:31.807294] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:19.383 [2024-12-06 04:18:31.807298] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:19.383 [2024-12-06 04:18:31.807302] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11f3cc0) on tqpair=0x11a7510 00:15:19.383 [2024-12-06 04:18:31.807313] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:19.383 [2024-12-06 04:18:31.807318] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:19.383 [2024-12-06 04:18:31.807322] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11a7510) 00:15:19.383 [2024-12-06 04:18:31.807329] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.383 [2024-12-06 04:18:31.807345] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f3cc0, cid 3, qid 0 00:15:19.383 [2024-12-06 04:18:31.811407] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:19.383 [2024-12-06 04:18:31.811429] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:19.383 [2024-12-06 04:18:31.811434] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:19.383 [2024-12-06 04:18:31.811439] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11f3cc0) on tqpair=0x11a7510 00:15:19.383 [2024-12-06 04:18:31.811455] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:19.383 [2024-12-06 04:18:31.811460] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:19.383 [2024-12-06 04:18:31.811464] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11a7510) 00:15:19.383 [2024-12-06 04:18:31.811473] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.383 [2024-12-06 04:18:31.811501] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f3cc0, cid 3, qid 0 00:15:19.383 [2024-12-06 04:18:31.811557] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:19.383 [2024-12-06 04:18:31.811564] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:19.383 [2024-12-06 04:18:31.811568] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:19.383 [2024-12-06 04:18:31.811572] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11f3cc0) on tqpair=0x11a7510 00:15:19.383 [2024-12-06 04:18:31.811581] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:15:19.383 00:15:19.383 04:18:31 -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:15:19.383 [2024-12-06 04:18:31.848382] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:15:19.383 [2024-12-06 04:18:31.848453] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80832 ] 00:15:19.645 [2024-12-06 04:18:31.985102] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:15:19.645 [2024-12-06 04:18:31.985180] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:15:19.645 [2024-12-06 04:18:31.985187] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:15:19.645 [2024-12-06 04:18:31.985198] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:15:19.645 [2024-12-06 04:18:31.985211] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl uring 00:15:19.645 [2024-12-06 04:18:31.985343] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:15:19.645 [2024-12-06 04:18:31.985396] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x2466510 0 00:15:19.645 [2024-12-06 04:18:31.990430] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:15:19.645 [2024-12-06 04:18:31.990452] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:15:19.645 [2024-12-06 04:18:31.990474] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:15:19.645 [2024-12-06 04:18:31.990478] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:15:19.645 [2024-12-06 04:18:31.990524] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:19.645 [2024-12-06 04:18:31.990531] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:19.645 [2024-12-06 04:18:31.990536] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2466510) 00:15:19.645 [2024-12-06 04:18:31.990550] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:15:19.645 [2024-12-06 04:18:31.990580] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24b28a0, cid 0, qid 0 00:15:19.645 [2024-12-06 04:18:31.998425] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:19.645 [2024-12-06 04:18:31.998446] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:19.645 [2024-12-06 04:18:31.998451] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:19.645 [2024-12-06 04:18:31.998455] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24b28a0) on tqpair=0x2466510 00:15:19.645 [2024-12-06 04:18:31.998486] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:15:19.645 [2024-12-06 04:18:31.998494] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:15:19.645 [2024-12-06 04:18:31.998500] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:15:19.645 [2024-12-06 04:18:31.998516] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:19.645 [2024-12-06 04:18:31.998521] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:19.645 [2024-12-06 04:18:31.998525] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2466510) 00:15:19.645 [2024-12-06 04:18:31.998535] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.645 [2024-12-06 04:18:31.998562] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24b28a0, cid 0, qid 0 00:15:19.645 [2024-12-06 04:18:31.998618] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:19.645 [2024-12-06 04:18:31.998626] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:19.645 [2024-12-06 04:18:31.998630] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:19.645 [2024-12-06 04:18:31.998634] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24b28a0) on tqpair=0x2466510 00:15:19.645 [2024-12-06 04:18:31.998641] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:15:19.645 [2024-12-06 04:18:31.998649] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:15:19.645 [2024-12-06 04:18:31.998657] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:19.645 [2024-12-06 04:18:31.998661] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:19.645 [2024-12-06 04:18:31.998665] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2466510) 00:15:19.645 [2024-12-06 04:18:31.998673] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.646 [2024-12-06 04:18:31.998691] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24b28a0, cid 0, qid 0 00:15:19.646 [2024-12-06 04:18:31.998745] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:19.646 [2024-12-06 04:18:31.998752] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:19.646 [2024-12-06 04:18:31.998756] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:19.646 [2024-12-06 04:18:31.998760] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24b28a0) on tqpair=0x2466510 00:15:19.646 [2024-12-06 04:18:31.998767] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:15:19.646 [2024-12-06 04:18:31.998776] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:15:19.646 [2024-12-06 04:18:31.998784] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:19.646 [2024-12-06 04:18:31.998788] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:19.646 [2024-12-06 04:18:31.998792] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2466510) 00:15:19.646 [2024-12-06 04:18:31.998799] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.646 [2024-12-06 04:18:31.998817] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24b28a0, cid 0, qid 0 00:15:19.646 [2024-12-06 04:18:31.998865] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:19.646 [2024-12-06 04:18:31.998872] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:19.646 [2024-12-06 
04:18:31.998876] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:19.646 [2024-12-06 04:18:31.998880] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24b28a0) on tqpair=0x2466510 00:15:19.646 [2024-12-06 04:18:31.998887] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:19.646 [2024-12-06 04:18:31.998897] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:19.646 [2024-12-06 04:18:31.998902] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:19.646 [2024-12-06 04:18:31.998906] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2466510) 00:15:19.646 [2024-12-06 04:18:31.998913] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.646 [2024-12-06 04:18:31.998930] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24b28a0, cid 0, qid 0 00:15:19.646 [2024-12-06 04:18:31.998978] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:19.646 [2024-12-06 04:18:31.998985] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:19.646 [2024-12-06 04:18:31.998989] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:19.646 [2024-12-06 04:18:31.998993] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24b28a0) on tqpair=0x2466510 00:15:19.646 [2024-12-06 04:18:31.998999] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:15:19.646 [2024-12-06 04:18:31.999004] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:15:19.646 [2024-12-06 04:18:31.999012] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:19.646 [2024-12-06 04:18:31.999118] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:15:19.646 [2024-12-06 04:18:31.999122] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:19.646 [2024-12-06 04:18:31.999131] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:19.646 [2024-12-06 04:18:31.999135] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:19.646 [2024-12-06 04:18:31.999139] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2466510) 00:15:19.646 [2024-12-06 04:18:31.999147] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.646 [2024-12-06 04:18:31.999164] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24b28a0, cid 0, qid 0 00:15:19.646 [2024-12-06 04:18:31.999215] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:19.646 [2024-12-06 04:18:31.999222] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:19.646 [2024-12-06 04:18:31.999226] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:19.646 [2024-12-06 04:18:31.999231] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24b28a0) on tqpair=0x2466510 00:15:19.646 
[2024-12-06 04:18:31.999237] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:19.646 [2024-12-06 04:18:31.999247] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:19.646 [2024-12-06 04:18:31.999252] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:19.646 [2024-12-06 04:18:31.999256] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2466510) 00:15:19.646 [2024-12-06 04:18:31.999263] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.646 [2024-12-06 04:18:31.999280] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24b28a0, cid 0, qid 0 00:15:19.646 [2024-12-06 04:18:31.999325] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:19.646 [2024-12-06 04:18:31.999332] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:19.646 [2024-12-06 04:18:31.999335] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:19.646 [2024-12-06 04:18:31.999340] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24b28a0) on tqpair=0x2466510 00:15:19.646 [2024-12-06 04:18:31.999345] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:19.646 [2024-12-06 04:18:31.999351] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:15:19.646 [2024-12-06 04:18:31.999359] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:15:19.646 [2024-12-06 04:18:31.999375] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:15:19.646 [2024-12-06 04:18:31.999384] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:19.646 [2024-12-06 04:18:31.999389] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:19.646 [2024-12-06 04:18:31.999393] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2466510) 00:15:19.646 [2024-12-06 04:18:31.999414] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.646 [2024-12-06 04:18:31.999436] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24b28a0, cid 0, qid 0 00:15:19.646 [2024-12-06 04:18:31.999528] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:19.646 [2024-12-06 04:18:31.999535] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:19.646 [2024-12-06 04:18:31.999539] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:19.646 [2024-12-06 04:18:31.999543] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2466510): datao=0, datal=4096, cccid=0 00:15:19.646 [2024-12-06 04:18:31.999548] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24b28a0) on tqpair(0x2466510): expected_datao=0, payload_size=4096 00:15:19.646 [2024-12-06 04:18:31.999557] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:19.646 [2024-12-06 04:18:31.999562] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: 
*DEBUG*: enter 00:15:19.646 [2024-12-06 04:18:31.999571] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:19.646 [2024-12-06 04:18:31.999577] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:19.646 [2024-12-06 04:18:31.999581] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:19.646 [2024-12-06 04:18:31.999585] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24b28a0) on tqpair=0x2466510 00:15:19.646 [2024-12-06 04:18:31.999594] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:15:19.646 [2024-12-06 04:18:31.999600] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:15:19.646 [2024-12-06 04:18:31.999604] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:15:19.646 [2024-12-06 04:18:31.999609] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:15:19.646 [2024-12-06 04:18:31.999614] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:15:19.646 [2024-12-06 04:18:31.999619] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:15:19.646 [2024-12-06 04:18:31.999634] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:15:19.646 [2024-12-06 04:18:31.999642] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:19.646 [2024-12-06 04:18:31.999647] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:19.646 [2024-12-06 04:18:31.999650] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2466510) 00:15:19.646 [2024-12-06 04:18:31.999658] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:19.646 [2024-12-06 04:18:31.999678] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24b28a0, cid 0, qid 0 00:15:19.646 [2024-12-06 04:18:31.999730] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:19.646 [2024-12-06 04:18:31.999736] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:19.646 [2024-12-06 04:18:31.999740] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:19.646 [2024-12-06 04:18:31.999744] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24b28a0) on tqpair=0x2466510 00:15:19.646 [2024-12-06 04:18:31.999753] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:19.646 [2024-12-06 04:18:31.999757] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:19.646 [2024-12-06 04:18:31.999761] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2466510) 00:15:19.646 [2024-12-06 04:18:31.999768] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:19.646 [2024-12-06 04:18:31.999774] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:19.646 [2024-12-06 04:18:31.999778] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:19.646 [2024-12-06 04:18:31.999782] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=1 on tqpair(0x2466510) 00:15:19.646 [2024-12-06 04:18:31.999788] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:19.646 [2024-12-06 04:18:31.999795] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:19.646 [2024-12-06 04:18:31.999798] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:19.646 [2024-12-06 04:18:31.999802] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x2466510) 00:15:19.646 [2024-12-06 04:18:31.999808] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:19.646 [2024-12-06 04:18:31.999814] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:19.647 [2024-12-06 04:18:31.999818] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:19.647 [2024-12-06 04:18:31.999822] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2466510) 00:15:19.647 [2024-12-06 04:18:31.999828] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:19.647 [2024-12-06 04:18:31.999833] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:15:19.647 [2024-12-06 04:18:31.999846] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:19.647 [2024-12-06 04:18:31.999854] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:19.647 [2024-12-06 04:18:31.999858] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:19.647 [2024-12-06 04:18:31.999862] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2466510) 00:15:19.647 [2024-12-06 04:18:31.999869] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.647 [2024-12-06 04:18:31.999889] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24b28a0, cid 0, qid 0 00:15:19.647 [2024-12-06 04:18:31.999896] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24b2a00, cid 1, qid 0 00:15:19.647 [2024-12-06 04:18:31.999901] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24b2b60, cid 2, qid 0 00:15:19.647 [2024-12-06 04:18:31.999906] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24b2cc0, cid 3, qid 0 00:15:19.647 [2024-12-06 04:18:31.999911] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24b2e20, cid 4, qid 0 00:15:19.647 [2024-12-06 04:18:32.000000] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:19.647 [2024-12-06 04:18:32.000007] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:19.647 [2024-12-06 04:18:32.000011] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:19.647 [2024-12-06 04:18:32.000015] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24b2e20) on tqpair=0x2466510 00:15:19.647 [2024-12-06 04:18:32.000021] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:15:19.647 [2024-12-06 04:18:32.000027] 
nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:19.647 [2024-12-06 04:18:32.000036] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:15:19.647 [2024-12-06 04:18:32.000046] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:15:19.647 [2024-12-06 04:18:32.000054] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:19.647 [2024-12-06 04:18:32.000058] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:19.647 [2024-12-06 04:18:32.000062] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2466510) 00:15:19.647 [2024-12-06 04:18:32.000070] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:19.647 [2024-12-06 04:18:32.000088] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24b2e20, cid 4, qid 0 00:15:19.647 [2024-12-06 04:18:32.000143] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:19.647 [2024-12-06 04:18:32.000150] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:19.647 [2024-12-06 04:18:32.000154] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:19.647 [2024-12-06 04:18:32.000158] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24b2e20) on tqpair=0x2466510 00:15:19.647 [2024-12-06 04:18:32.000220] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:15:19.647 [2024-12-06 04:18:32.000230] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:15:19.647 [2024-12-06 04:18:32.000238] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:19.647 [2024-12-06 04:18:32.000243] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:19.647 [2024-12-06 04:18:32.000246] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2466510) 00:15:19.647 [2024-12-06 04:18:32.000254] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.647 [2024-12-06 04:18:32.000272] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24b2e20, cid 4, qid 0 00:15:19.647 [2024-12-06 04:18:32.000331] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:19.647 [2024-12-06 04:18:32.000338] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:19.647 [2024-12-06 04:18:32.000342] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:19.647 [2024-12-06 04:18:32.000346] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2466510): datao=0, datal=4096, cccid=4 00:15:19.647 [2024-12-06 04:18:32.000351] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24b2e20) on tqpair(0x2466510): expected_datao=0, payload_size=4096 00:15:19.647 [2024-12-06 04:18:32.000359] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:19.647 [2024-12-06 04:18:32.000363] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: 
enter 00:15:19.647 [2024-12-06 04:18:32.000371] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:19.647 [2024-12-06 04:18:32.000377] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:19.647 [2024-12-06 04:18:32.000381] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:19.647 [2024-12-06 04:18:32.000397] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24b2e20) on tqpair=0x2466510 00:15:19.647 [2024-12-06 04:18:32.000415] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:15:19.647 [2024-12-06 04:18:32.000426] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:15:19.647 [2024-12-06 04:18:32.000437] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:15:19.647 [2024-12-06 04:18:32.000445] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:19.647 [2024-12-06 04:18:32.000449] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:19.647 [2024-12-06 04:18:32.000453] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2466510) 00:15:19.647 [2024-12-06 04:18:32.000461] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.647 [2024-12-06 04:18:32.000481] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24b2e20, cid 4, qid 0 00:15:19.647 [2024-12-06 04:18:32.000559] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:19.647 [2024-12-06 04:18:32.000566] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:19.647 [2024-12-06 04:18:32.000570] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:19.647 [2024-12-06 04:18:32.000573] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2466510): datao=0, datal=4096, cccid=4 00:15:19.647 [2024-12-06 04:18:32.000578] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24b2e20) on tqpair(0x2466510): expected_datao=0, payload_size=4096 00:15:19.647 [2024-12-06 04:18:32.000586] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:19.647 [2024-12-06 04:18:32.000590] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:19.647 [2024-12-06 04:18:32.000599] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:19.647 [2024-12-06 04:18:32.000605] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:19.647 [2024-12-06 04:18:32.000609] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:19.647 [2024-12-06 04:18:32.000613] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24b2e20) on tqpair=0x2466510 00:15:19.647 [2024-12-06 04:18:32.000630] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:19.647 [2024-12-06 04:18:32.000641] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:19.647 [2024-12-06 04:18:32.000649] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:19.647 [2024-12-06 04:18:32.000653] nvme_tcp.c: 
893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:19.647 [2024-12-06 04:18:32.000657] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2466510) 00:15:19.647 [2024-12-06 04:18:32.000664] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.647 [2024-12-06 04:18:32.000683] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24b2e20, cid 4, qid 0 00:15:19.647 [2024-12-06 04:18:32.000739] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:19.647 [2024-12-06 04:18:32.000746] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:19.647 [2024-12-06 04:18:32.000750] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:19.647 [2024-12-06 04:18:32.000754] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2466510): datao=0, datal=4096, cccid=4 00:15:19.647 [2024-12-06 04:18:32.000759] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24b2e20) on tqpair(0x2466510): expected_datao=0, payload_size=4096 00:15:19.647 [2024-12-06 04:18:32.000767] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:19.647 [2024-12-06 04:18:32.000771] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:19.647 [2024-12-06 04:18:32.000779] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:19.647 [2024-12-06 04:18:32.000785] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:19.647 [2024-12-06 04:18:32.000789] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:19.647 [2024-12-06 04:18:32.000793] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24b2e20) on tqpair=0x2466510 00:15:19.647 [2024-12-06 04:18:32.000803] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:19.647 [2024-12-06 04:18:32.000811] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:15:19.647 [2024-12-06 04:18:32.000822] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:15:19.647 [2024-12-06 04:18:32.000829] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:19.647 [2024-12-06 04:18:32.000835] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:15:19.647 [2024-12-06 04:18:32.000840] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:15:19.647 [2024-12-06 04:18:32.000845] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:15:19.647 [2024-12-06 04:18:32.000851] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:15:19.647 [2024-12-06 04:18:32.000867] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:19.647 [2024-12-06 04:18:32.000871] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:19.647 [2024-12-06 04:18:32.000875] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2466510) 00:15:19.648 [2024-12-06 04:18:32.000883] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.648 [2024-12-06 04:18:32.000890] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:19.648 [2024-12-06 04:18:32.000894] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:19.648 [2024-12-06 04:18:32.000898] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2466510) 00:15:19.648 [2024-12-06 04:18:32.000904] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:15:19.648 [2024-12-06 04:18:32.000928] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24b2e20, cid 4, qid 0 00:15:19.648 [2024-12-06 04:18:32.000935] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24b2f80, cid 5, qid 0 00:15:19.648 [2024-12-06 04:18:32.001002] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:19.648 [2024-12-06 04:18:32.001009] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:19.648 [2024-12-06 04:18:32.001013] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:19.648 [2024-12-06 04:18:32.001017] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24b2e20) on tqpair=0x2466510 00:15:19.648 [2024-12-06 04:18:32.001025] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:19.648 [2024-12-06 04:18:32.001031] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:19.648 [2024-12-06 04:18:32.001035] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:19.648 [2024-12-06 04:18:32.001039] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24b2f80) on tqpair=0x2466510 00:15:19.648 [2024-12-06 04:18:32.001050] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:19.648 [2024-12-06 04:18:32.001054] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:19.648 [2024-12-06 04:18:32.001058] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2466510) 00:15:19.648 [2024-12-06 04:18:32.001065] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.648 [2024-12-06 04:18:32.001082] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24b2f80, cid 5, qid 0 00:15:19.648 [2024-12-06 04:18:32.001133] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:19.648 [2024-12-06 04:18:32.001140] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:19.648 [2024-12-06 04:18:32.001143] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:19.648 [2024-12-06 04:18:32.001148] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24b2f80) on tqpair=0x2466510 00:15:19.648 [2024-12-06 04:18:32.001159] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:19.648 [2024-12-06 04:18:32.001163] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:19.648 [2024-12-06 04:18:32.001167] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2466510) 00:15:19.648 [2024-12-06 04:18:32.001174] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.648 [2024-12-06 04:18:32.001190] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24b2f80, cid 5, qid 0 00:15:19.648 [2024-12-06 04:18:32.001253] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:19.648 [2024-12-06 04:18:32.001259] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:19.648 [2024-12-06 04:18:32.001263] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:19.648 [2024-12-06 04:18:32.001267] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24b2f80) on tqpair=0x2466510 00:15:19.648 [2024-12-06 04:18:32.001279] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:19.648 [2024-12-06 04:18:32.001283] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:19.648 [2024-12-06 04:18:32.001287] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2466510) 00:15:19.648 [2024-12-06 04:18:32.001294] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.648 [2024-12-06 04:18:32.001310] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24b2f80, cid 5, qid 0 00:15:19.648 [2024-12-06 04:18:32.001360] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:19.648 [2024-12-06 04:18:32.001367] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:19.648 [2024-12-06 04:18:32.001370] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:19.648 [2024-12-06 04:18:32.001375] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24b2f80) on tqpair=0x2466510 00:15:19.648 [2024-12-06 04:18:32.001403] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:19.648 [2024-12-06 04:18:32.001410] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:19.648 [2024-12-06 04:18:32.001414] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2466510) 00:15:19.648 [2024-12-06 04:18:32.001421] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.648 [2024-12-06 04:18:32.001429] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:19.648 [2024-12-06 04:18:32.001433] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:19.648 [2024-12-06 04:18:32.001437] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2466510) 00:15:19.648 [2024-12-06 04:18:32.001443] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.648 [2024-12-06 04:18:32.001450] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:19.648 [2024-12-06 04:18:32.001454] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:19.648 [2024-12-06 04:18:32.001458] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x2466510) 00:15:19.648 [2024-12-06 04:18:32.001465] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:15:19.648 [2024-12-06 04:18:32.001472] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:19.648 [2024-12-06 04:18:32.001476] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:19.648 [2024-12-06 04:18:32.001480] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x2466510) 00:15:19.648 [2024-12-06 04:18:32.001486] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.648 [2024-12-06 04:18:32.001507] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24b2f80, cid 5, qid 0 00:15:19.648 [2024-12-06 04:18:32.001515] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24b2e20, cid 4, qid 0 00:15:19.648 [2024-12-06 04:18:32.001520] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24b30e0, cid 6, qid 0 00:15:19.648 [2024-12-06 04:18:32.001525] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24b3240, cid 7, qid 0 00:15:19.648 [2024-12-06 04:18:32.001662] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:19.648 [2024-12-06 04:18:32.001669] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:19.648 [2024-12-06 04:18:32.001673] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:19.648 [2024-12-06 04:18:32.001676] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2466510): datao=0, datal=8192, cccid=5 00:15:19.648 [2024-12-06 04:18:32.001681] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24b2f80) on tqpair(0x2466510): expected_datao=0, payload_size=8192 00:15:19.648 [2024-12-06 04:18:32.001700] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:19.648 [2024-12-06 04:18:32.001705] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:19.648 [2024-12-06 04:18:32.001711] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:19.648 [2024-12-06 04:18:32.001717] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:19.648 [2024-12-06 04:18:32.001721] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:19.648 [2024-12-06 04:18:32.001725] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2466510): datao=0, datal=512, cccid=4 00:15:19.648 [2024-12-06 04:18:32.001730] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24b2e20) on tqpair(0x2466510): expected_datao=0, payload_size=512 00:15:19.648 [2024-12-06 04:18:32.001737] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:19.648 [2024-12-06 04:18:32.001741] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:19.648 [2024-12-06 04:18:32.001747] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:19.648 [2024-12-06 04:18:32.001753] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:19.648 [2024-12-06 04:18:32.001756] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:19.648 [2024-12-06 04:18:32.001760] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2466510): datao=0, datal=512, cccid=6 00:15:19.648 [2024-12-06 04:18:32.001765] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24b30e0) on tqpair(0x2466510): expected_datao=0, payload_size=512 00:15:19.648 [2024-12-06 04:18:32.001772] 
nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:19.648 [2024-12-06 04:18:32.001776] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:19.648 [2024-12-06 04:18:32.001781] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:19.648 [2024-12-06 04:18:32.001787] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:19.648 [2024-12-06 04:18:32.001791] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:19.648 [2024-12-06 04:18:32.001795] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2466510): datao=0, datal=4096, cccid=7 00:15:19.648 [2024-12-06 04:18:32.001800] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24b3240) on tqpair(0x2466510): expected_datao=0, payload_size=4096 00:15:19.648 [2024-12-06 04:18:32.001807] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:19.648 [2024-12-06 04:18:32.001811] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:19.648 [2024-12-06 04:18:32.001819] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:19.648 [2024-12-06 04:18:32.001825] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:19.648 [2024-12-06 04:18:32.001829] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:19.648 [2024-12-06 04:18:32.001833] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24b2f80) on tqpair=0x2466510 00:15:19.648 ===================================================== 00:15:19.648 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:19.648 ===================================================== 00:15:19.648 Controller Capabilities/Features 00:15:19.648 ================================ 00:15:19.648 Vendor ID: 8086 00:15:19.648 Subsystem Vendor ID: 8086 00:15:19.648 Serial Number: SPDK00000000000001 00:15:19.648 Model Number: SPDK bdev Controller 00:15:19.648 Firmware Version: 24.01.1 00:15:19.648 Recommended Arb Burst: 6 00:15:19.648 IEEE OUI Identifier: e4 d2 5c 00:15:19.648 Multi-path I/O 00:15:19.648 May have multiple subsystem ports: Yes 00:15:19.648 May have multiple controllers: Yes 00:15:19.648 Associated with SR-IOV VF: No 00:15:19.648 Max Data Transfer Size: 131072 00:15:19.649 Max Number of Namespaces: 32 00:15:19.649 Max Number of I/O Queues: 127 00:15:19.649 NVMe Specification Version (VS): 1.3 00:15:19.649 NVMe Specification Version (Identify): 1.3 00:15:19.649 Maximum Queue Entries: 128 00:15:19.649 Contiguous Queues Required: Yes 00:15:19.649 Arbitration Mechanisms Supported 00:15:19.649 Weighted Round Robin: Not Supported 00:15:19.649 Vendor Specific: Not Supported 00:15:19.649 Reset Timeout: 15000 ms 00:15:19.649 Doorbell Stride: 4 bytes 00:15:19.649 NVM Subsystem Reset: Not Supported 00:15:19.649 Command Sets Supported 00:15:19.649 NVM Command Set: Supported 00:15:19.649 Boot Partition: Not Supported 00:15:19.649 Memory Page Size Minimum: 4096 bytes 00:15:19.649 Memory Page Size Maximum: 4096 bytes 00:15:19.649 Persistent Memory Region: Not Supported 00:15:19.649 Optional Asynchronous Events Supported 00:15:19.649 Namespace Attribute Notices: Supported 00:15:19.649 Firmware Activation Notices: Not Supported 00:15:19.649 ANA Change Notices: Not Supported 00:15:19.649 PLE Aggregate Log Change Notices: Not Supported 00:15:19.649 LBA Status Info Alert Notices: Not Supported 00:15:19.649 EGE Aggregate Log Change Notices: Not Supported 00:15:19.649 Normal NVM Subsystem Shutdown event: 
Not Supported 00:15:19.649 Zone Descriptor Change Notices: Not Supported 00:15:19.649 Discovery Log Change Notices: Not Supported 00:15:19.649 Controller Attributes 00:15:19.649 128-bit Host Identifier: Supported 00:15:19.649 Non-Operational Permissive Mode: Not Supported 00:15:19.649 NVM Sets: Not Supported 00:15:19.649 Read Recovery Levels: Not Supported 00:15:19.649 Endurance Groups: Not Supported 00:15:19.649 Predictable Latency Mode: Not Supported 00:15:19.649 Traffic Based Keep ALive: Not Supported 00:15:19.649 Namespace Granularity: Not Supported 00:15:19.649 SQ Associations: Not Supported 00:15:19.649 UUID List: Not Supported 00:15:19.649 Multi-Domain Subsystem: Not Supported 00:15:19.649 Fixed Capacity Management: Not Supported 00:15:19.649 Variable Capacity Management: Not Supported 00:15:19.649 Delete Endurance Group: Not Supported 00:15:19.649 Delete NVM Set: Not Supported 00:15:19.649 Extended LBA Formats Supported: Not Supported 00:15:19.649 Flexible Data Placement Supported: Not Supported 00:15:19.649 00:15:19.649 Controller Memory Buffer Support 00:15:19.649 ================================ 00:15:19.649 Supported: No 00:15:19.649 00:15:19.649 Persistent Memory Region Support 00:15:19.649 ================================ 00:15:19.649 Supported: No 00:15:19.649 00:15:19.649 Admin Command Set Attributes 00:15:19.649 ============================ 00:15:19.649 Security Send/Receive: Not Supported 00:15:19.649 Format NVM: Not Supported 00:15:19.649 Firmware Activate/Download: Not Supported 00:15:19.649 Namespace Management: Not Supported 00:15:19.649 Device Self-Test: Not Supported 00:15:19.649 Directives: Not Supported 00:15:19.649 NVMe-MI: Not Supported 00:15:19.649 Virtualization Management: Not Supported 00:15:19.649 Doorbell Buffer Config: Not Supported 00:15:19.649 Get LBA Status Capability: Not Supported 00:15:19.649 Command & Feature Lockdown Capability: Not Supported 00:15:19.649 Abort Command Limit: 4 00:15:19.649 Async Event Request Limit: 4 00:15:19.649 Number of Firmware Slots: N/A 00:15:19.649 Firmware Slot 1 Read-Only: N/A 00:15:19.649 Firmware Activation Without Reset: N/A 00:15:19.649 Multiple Update Detection Support: N/A 00:15:19.649 Firmware Update Granularity: No Information Provided 00:15:19.649 Per-Namespace SMART Log: No 00:15:19.649 Asymmetric Namespace Access Log Page: Not Supported 00:15:19.649 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:15:19.649 Command Effects Log Page: Supported 00:15:19.649 Get Log Page Extended Data: Supported 00:15:19.649 Telemetry Log Pages: Not Supported 00:15:19.649 Persistent Event Log Pages: Not Supported 00:15:19.649 Supported Log Pages Log Page: May Support 00:15:19.649 Commands Supported & Effects Log Page: Not Supported 00:15:19.649 Feature Identifiers & Effects Log Page:May Support 00:15:19.649 NVMe-MI Commands & Effects Log Page: May Support 00:15:19.649 Data Area 4 for Telemetry Log: Not Supported 00:15:19.649 Error Log Page Entries Supported: 128 00:15:19.649 Keep Alive: Supported 00:15:19.649 Keep Alive Granularity: 10000 ms 00:15:19.649 00:15:19.649 NVM Command Set Attributes 00:15:19.649 ========================== 00:15:19.649 Submission Queue Entry Size 00:15:19.649 Max: 64 00:15:19.649 Min: 64 00:15:19.649 Completion Queue Entry Size 00:15:19.649 Max: 16 00:15:19.649 Min: 16 00:15:19.649 Number of Namespaces: 32 00:15:19.649 Compare Command: Supported 00:15:19.649 Write Uncorrectable Command: Not Supported 00:15:19.649 Dataset Management Command: Supported 00:15:19.649 Write Zeroes Command: Supported 
00:15:19.649 Set Features Save Field: Not Supported 00:15:19.649 Reservations: Supported 00:15:19.649 Timestamp: Not Supported 00:15:19.649 Copy: Supported 00:15:19.649 Volatile Write Cache: Present 00:15:19.649 Atomic Write Unit (Normal): 1 00:15:19.649 Atomic Write Unit (PFail): 1 00:15:19.649 Atomic Compare & Write Unit: 1 00:15:19.649 Fused Compare & Write: Supported 00:15:19.649 Scatter-Gather List 00:15:19.649 SGL Command Set: Supported 00:15:19.649 SGL Keyed: Supported 00:15:19.649 SGL Bit Bucket Descriptor: Not Supported 00:15:19.649 SGL Metadata Pointer: Not Supported 00:15:19.649 Oversized SGL: Not Supported 00:15:19.649 SGL Metadata Address: Not Supported 00:15:19.649 SGL Offset: Supported 00:15:19.649 Transport SGL Data Block: Not Supported 00:15:19.649 Replay Protected Memory Block: Not Supported 00:15:19.649 00:15:19.649 Firmware Slot Information 00:15:19.649 ========================= 00:15:19.649 Active slot: 1 00:15:19.649 Slot 1 Firmware Revision: 24.01.1 00:15:19.649 00:15:19.649 00:15:19.649 Commands Supported and Effects 00:15:19.649 ============================== 00:15:19.649 Admin Commands 00:15:19.649 -------------- 00:15:19.649 Get Log Page (02h): Supported 00:15:19.649 Identify (06h): Supported 00:15:19.649 Abort (08h): Supported 00:15:19.649 Set Features (09h): Supported 00:15:19.649 Get Features (0Ah): Supported 00:15:19.649 Asynchronous Event Request (0Ch): Supported 00:15:19.649 Keep Alive (18h): Supported 00:15:19.649 I/O Commands 00:15:19.649 ------------ 00:15:19.649 Flush (00h): Supported LBA-Change 00:15:19.649 Write (01h): Supported LBA-Change 00:15:19.649 Read (02h): Supported 00:15:19.649 Compare (05h): Supported 00:15:19.649 Write Zeroes (08h): Supported LBA-Change 00:15:19.649 Dataset Management (09h): Supported LBA-Change 00:15:19.649 Copy (19h): Supported LBA-Change 00:15:19.649 Unknown (79h): Supported LBA-Change 00:15:19.649 Unknown (7Ah): Supported 00:15:19.649 00:15:19.649 Error Log 00:15:19.649 ========= 00:15:19.649 00:15:19.649 Arbitration 00:15:19.649 =========== 00:15:19.649 Arbitration Burst: 1 00:15:19.649 00:15:19.649 Power Management 00:15:19.649 ================ 00:15:19.649 Number of Power States: 1 00:15:19.649 Current Power State: Power State #0 00:15:19.649 Power State #0: 00:15:19.649 Max Power: 0.00 W 00:15:19.649 Non-Operational State: Operational 00:15:19.649 Entry Latency: Not Reported 00:15:19.649 Exit Latency: Not Reported 00:15:19.649 Relative Read Throughput: 0 00:15:19.649 Relative Read Latency: 0 00:15:19.649 Relative Write Throughput: 0 00:15:19.649 Relative Write Latency: 0 00:15:19.649 Idle Power: Not Reported 00:15:19.649 Active Power: Not Reported 00:15:19.649 Non-Operational Permissive Mode: Not Supported 00:15:19.649 00:15:19.649 Health Information 00:15:19.649 ================== 00:15:19.649 Critical Warnings: 00:15:19.649 Available Spare Space: OK 00:15:19.649 Temperature: OK 00:15:19.649 Device Reliability: OK 00:15:19.649 Read Only: No 00:15:19.649 Volatile Memory Backup: OK 00:15:19.649 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:19.649 Temperature Threshold: [2024-12-06 04:18:32.001851] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:19.649 [2024-12-06 04:18:32.001858] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:19.649 [2024-12-06 04:18:32.001861] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:19.649 [2024-12-06 04:18:32.001865] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24b2e20) on tqpair=0x2466510 
00:15:19.649 [2024-12-06 04:18:32.001879] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:19.649 [2024-12-06 04:18:32.001885] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:19.649 [2024-12-06 04:18:32.001889] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:19.649 [2024-12-06 04:18:32.001893] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24b30e0) on tqpair=0x2466510 00:15:19.649 [2024-12-06 04:18:32.001902] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:19.649 [2024-12-06 04:18:32.001908] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:19.649 [2024-12-06 04:18:32.001911] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:19.650 [2024-12-06 04:18:32.001915] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24b3240) on tqpair=0x2466510 00:15:19.650 [2024-12-06 04:18:32.002030] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:19.650 [2024-12-06 04:18:32.002037] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:19.650 [2024-12-06 04:18:32.002041] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x2466510) 00:15:19.650 [2024-12-06 04:18:32.002049] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.650 [2024-12-06 04:18:32.002072] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24b3240, cid 7, qid 0 00:15:19.650 [2024-12-06 04:18:32.002121] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:19.650 [2024-12-06 04:18:32.002128] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:19.650 [2024-12-06 04:18:32.002132] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:19.650 [2024-12-06 04:18:32.002136] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24b3240) on tqpair=0x2466510 00:15:19.650 [2024-12-06 04:18:32.002171] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:15:19.650 [2024-12-06 04:18:32.002185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.650 [2024-12-06 04:18:32.002192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.650 [2024-12-06 04:18:32.002199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.650 [2024-12-06 04:18:32.002205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:19.650 [2024-12-06 04:18:32.002214] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:19.650 [2024-12-06 04:18:32.002219] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:19.650 [2024-12-06 04:18:32.002224] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2466510) 00:15:19.650 [2024-12-06 04:18:32.002231] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.650 [2024-12-06 04:18:32.002253] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24b2cc0, cid 3, qid 0 
00:15:19.650 [2024-12-06 04:18:32.002303] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:19.650 [2024-12-06 04:18:32.002310] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:19.650 [2024-12-06 04:18:32.002314] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:19.650 [2024-12-06 04:18:32.002318] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24b2cc0) on tqpair=0x2466510 00:15:19.650 [2024-12-06 04:18:32.002327] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:19.650 [2024-12-06 04:18:32.002331] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:19.650 [2024-12-06 04:18:32.002335] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2466510) 00:15:19.650 [2024-12-06 04:18:32.002343] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.650 [2024-12-06 04:18:32.002363] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24b2cc0, cid 3, qid 0 00:15:19.650 [2024-12-06 04:18:32.006413] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:19.650 [2024-12-06 04:18:32.006431] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:19.650 [2024-12-06 04:18:32.006436] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:19.650 [2024-12-06 04:18:32.006441] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24b2cc0) on tqpair=0x2466510 00:15:19.650 [2024-12-06 04:18:32.006447] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:15:19.650 [2024-12-06 04:18:32.006453] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:15:19.650 [2024-12-06 04:18:32.006465] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:19.650 [2024-12-06 04:18:32.006471] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:19.650 [2024-12-06 04:18:32.006475] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2466510) 00:15:19.650 [2024-12-06 04:18:32.006484] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:19.650 [2024-12-06 04:18:32.006508] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24b2cc0, cid 3, qid 0 00:15:19.650 [2024-12-06 04:18:32.006560] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:19.650 [2024-12-06 04:18:32.006567] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:19.650 [2024-12-06 04:18:32.006571] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:19.650 [2024-12-06 04:18:32.006575] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24b2cc0) on tqpair=0x2466510 00:15:19.650 [2024-12-06 04:18:32.006585] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 0 milliseconds 00:15:19.650 0 Kelvin (-273 Celsius) 00:15:19.650 Available Spare: 0% 00:15:19.650 Available Spare Threshold: 0% 00:15:19.650 Life Percentage Used: 0% 00:15:19.650 Data Units Read: 0 00:15:19.650 Data Units Written: 0 00:15:19.650 Host Read Commands: 0 00:15:19.650 Host Write Commands: 0 00:15:19.650 Controller Busy Time: 0 minutes 00:15:19.650 Power Cycles: 0 00:15:19.650 
Power On Hours: 0 hours 00:15:19.650 Unsafe Shutdowns: 0 00:15:19.650 Unrecoverable Media Errors: 0 00:15:19.650 Lifetime Error Log Entries: 0 00:15:19.650 Warning Temperature Time: 0 minutes 00:15:19.650 Critical Temperature Time: 0 minutes 00:15:19.650 00:15:19.650 Number of Queues 00:15:19.650 ================ 00:15:19.650 Number of I/O Submission Queues: 127 00:15:19.650 Number of I/O Completion Queues: 127 00:15:19.650 00:15:19.650 Active Namespaces 00:15:19.650 ================= 00:15:19.650 Namespace ID:1 00:15:19.650 Error Recovery Timeout: Unlimited 00:15:19.650 Command Set Identifier: NVM (00h) 00:15:19.650 Deallocate: Supported 00:15:19.650 Deallocated/Unwritten Error: Not Supported 00:15:19.650 Deallocated Read Value: Unknown 00:15:19.650 Deallocate in Write Zeroes: Not Supported 00:15:19.650 Deallocated Guard Field: 0xFFFF 00:15:19.650 Flush: Supported 00:15:19.650 Reservation: Supported 00:15:19.650 Namespace Sharing Capabilities: Multiple Controllers 00:15:19.650 Size (in LBAs): 131072 (0GiB) 00:15:19.650 Capacity (in LBAs): 131072 (0GiB) 00:15:19.650 Utilization (in LBAs): 131072 (0GiB) 00:15:19.650 NGUID: ABCDEF0123456789ABCDEF0123456789 00:15:19.650 EUI64: ABCDEF0123456789 00:15:19.650 UUID: 499ec7df-b737-4bd8-8eba-13ed63326b54 00:15:19.650 Thin Provisioning: Not Supported 00:15:19.650 Per-NS Atomic Units: Yes 00:15:19.650 Atomic Boundary Size (Normal): 0 00:15:19.650 Atomic Boundary Size (PFail): 0 00:15:19.650 Atomic Boundary Offset: 0 00:15:19.650 Maximum Single Source Range Length: 65535 00:15:19.650 Maximum Copy Length: 65535 00:15:19.650 Maximum Source Range Count: 1 00:15:19.650 NGUID/EUI64 Never Reused: No 00:15:19.650 Namespace Write Protected: No 00:15:19.650 Number of LBA Formats: 1 00:15:19.650 Current LBA Format: LBA Format #00 00:15:19.650 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:19.650 00:15:19.650 04:18:32 -- host/identify.sh@51 -- # sync 00:15:19.650 04:18:32 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:19.650 04:18:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.650 04:18:32 -- common/autotest_common.sh@10 -- # set +x 00:15:19.650 04:18:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.650 04:18:32 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:15:19.650 04:18:32 -- host/identify.sh@56 -- # nvmftestfini 00:15:19.650 04:18:32 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:19.650 04:18:32 -- nvmf/common.sh@116 -- # sync 00:15:19.650 04:18:32 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:19.650 04:18:32 -- nvmf/common.sh@119 -- # set +e 00:15:19.650 04:18:32 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:19.650 04:18:32 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:19.650 rmmod nvme_tcp 00:15:19.650 rmmod nvme_fabrics 00:15:19.650 rmmod nvme_keyring 00:15:19.650 04:18:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:19.650 04:18:32 -- nvmf/common.sh@123 -- # set -e 00:15:19.650 04:18:32 -- nvmf/common.sh@124 -- # return 0 00:15:19.650 04:18:32 -- nvmf/common.sh@477 -- # '[' -n 80784 ']' 00:15:19.650 04:18:32 -- nvmf/common.sh@478 -- # killprocess 80784 00:15:19.650 04:18:32 -- common/autotest_common.sh@936 -- # '[' -z 80784 ']' 00:15:19.650 04:18:32 -- common/autotest_common.sh@940 -- # kill -0 80784 00:15:19.650 04:18:32 -- common/autotest_common.sh@941 -- # uname 00:15:19.650 04:18:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:19.651 04:18:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o 
comm= 80784 00:15:19.651 04:18:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:19.651 04:18:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:19.651 killing process with pid 80784 00:15:19.651 04:18:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 80784' 00:15:19.651 04:18:32 -- common/autotest_common.sh@955 -- # kill 80784 00:15:19.651 [2024-12-06 04:18:32.173487] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:15:19.651 04:18:32 -- common/autotest_common.sh@960 -- # wait 80784 00:15:19.910 04:18:32 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:19.910 04:18:32 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:19.910 04:18:32 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:19.910 04:18:32 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:19.910 04:18:32 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:19.910 04:18:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:19.910 04:18:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:19.910 04:18:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:19.910 04:18:32 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:19.910 00:15:19.910 real 0m2.565s 00:15:19.910 user 0m7.201s 00:15:19.910 sys 0m0.649s 00:15:19.910 04:18:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:19.910 04:18:32 -- common/autotest_common.sh@10 -- # set +x 00:15:19.910 ************************************ 00:15:19.910 END TEST nvmf_identify 00:15:19.910 ************************************ 00:15:20.169 04:18:32 -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:15:20.169 04:18:32 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:20.169 04:18:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:20.169 04:18:32 -- common/autotest_common.sh@10 -- # set +x 00:15:20.169 ************************************ 00:15:20.169 START TEST nvmf_perf 00:15:20.169 ************************************ 00:15:20.169 04:18:32 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:15:20.169 * Looking for test storage... 
00:15:20.169 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:20.169 04:18:32 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:20.169 04:18:32 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:20.169 04:18:32 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:20.169 04:18:32 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:20.169 04:18:32 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:20.169 04:18:32 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:20.169 04:18:32 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:20.169 04:18:32 -- scripts/common.sh@335 -- # IFS=.-: 00:15:20.169 04:18:32 -- scripts/common.sh@335 -- # read -ra ver1 00:15:20.169 04:18:32 -- scripts/common.sh@336 -- # IFS=.-: 00:15:20.169 04:18:32 -- scripts/common.sh@336 -- # read -ra ver2 00:15:20.169 04:18:32 -- scripts/common.sh@337 -- # local 'op=<' 00:15:20.169 04:18:32 -- scripts/common.sh@339 -- # ver1_l=2 00:15:20.169 04:18:32 -- scripts/common.sh@340 -- # ver2_l=1 00:15:20.169 04:18:32 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:20.169 04:18:32 -- scripts/common.sh@343 -- # case "$op" in 00:15:20.169 04:18:32 -- scripts/common.sh@344 -- # : 1 00:15:20.169 04:18:32 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:20.169 04:18:32 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:20.170 04:18:32 -- scripts/common.sh@364 -- # decimal 1 00:15:20.170 04:18:32 -- scripts/common.sh@352 -- # local d=1 00:15:20.170 04:18:32 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:20.170 04:18:32 -- scripts/common.sh@354 -- # echo 1 00:15:20.170 04:18:32 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:20.170 04:18:32 -- scripts/common.sh@365 -- # decimal 2 00:15:20.170 04:18:32 -- scripts/common.sh@352 -- # local d=2 00:15:20.170 04:18:32 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:20.170 04:18:32 -- scripts/common.sh@354 -- # echo 2 00:15:20.170 04:18:32 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:20.170 04:18:32 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:20.170 04:18:32 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:20.170 04:18:32 -- scripts/common.sh@367 -- # return 0 00:15:20.170 04:18:32 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:20.170 04:18:32 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:20.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:20.170 --rc genhtml_branch_coverage=1 00:15:20.170 --rc genhtml_function_coverage=1 00:15:20.170 --rc genhtml_legend=1 00:15:20.170 --rc geninfo_all_blocks=1 00:15:20.170 --rc geninfo_unexecuted_blocks=1 00:15:20.170 00:15:20.170 ' 00:15:20.170 04:18:32 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:20.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:20.170 --rc genhtml_branch_coverage=1 00:15:20.170 --rc genhtml_function_coverage=1 00:15:20.170 --rc genhtml_legend=1 00:15:20.170 --rc geninfo_all_blocks=1 00:15:20.170 --rc geninfo_unexecuted_blocks=1 00:15:20.170 00:15:20.170 ' 00:15:20.170 04:18:32 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:20.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:20.170 --rc genhtml_branch_coverage=1 00:15:20.170 --rc genhtml_function_coverage=1 00:15:20.170 --rc genhtml_legend=1 00:15:20.170 --rc geninfo_all_blocks=1 00:15:20.170 --rc geninfo_unexecuted_blocks=1 00:15:20.170 00:15:20.170 ' 00:15:20.170 
04:18:32 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:20.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:20.170 --rc genhtml_branch_coverage=1 00:15:20.170 --rc genhtml_function_coverage=1 00:15:20.170 --rc genhtml_legend=1 00:15:20.170 --rc geninfo_all_blocks=1 00:15:20.170 --rc geninfo_unexecuted_blocks=1 00:15:20.170 00:15:20.170 ' 00:15:20.170 04:18:32 -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:20.170 04:18:32 -- nvmf/common.sh@7 -- # uname -s 00:15:20.170 04:18:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:20.170 04:18:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:20.170 04:18:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:20.170 04:18:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:20.170 04:18:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:20.170 04:18:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:20.170 04:18:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:20.170 04:18:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:20.170 04:18:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:20.170 04:18:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:20.170 04:18:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cb4d3929-adbe-4142-b5d1-990bbf2d4fca 00:15:20.170 04:18:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=cb4d3929-adbe-4142-b5d1-990bbf2d4fca 00:15:20.170 04:18:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:20.170 04:18:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:20.170 04:18:32 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:20.170 04:18:32 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:20.170 04:18:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:20.170 04:18:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:20.170 04:18:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:20.170 04:18:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.170 04:18:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.170 04:18:32 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.170 04:18:32 -- paths/export.sh@5 -- # export PATH 00:15:20.170 04:18:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.170 04:18:32 -- nvmf/common.sh@46 -- # : 0 00:15:20.170 04:18:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:20.170 04:18:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:20.170 04:18:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:20.170 04:18:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:20.170 04:18:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:20.170 04:18:32 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:20.170 04:18:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:20.170 04:18:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:20.170 04:18:32 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:20.170 04:18:32 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:20.170 04:18:32 -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:20.170 04:18:32 -- host/perf.sh@17 -- # nvmftestinit 00:15:20.170 04:18:32 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:20.170 04:18:32 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:20.170 04:18:32 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:20.170 04:18:32 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:20.170 04:18:32 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:20.170 04:18:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:20.170 04:18:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:20.170 04:18:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:20.170 04:18:32 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:20.170 04:18:32 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:20.170 04:18:32 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:20.170 04:18:32 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:20.170 04:18:32 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:20.170 04:18:32 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:20.170 04:18:32 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:20.170 04:18:32 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:20.170 04:18:32 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:20.170 04:18:32 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:20.170 04:18:32 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:20.170 04:18:32 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:20.170 04:18:32 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:20.170 04:18:32 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:20.170 04:18:32 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:20.170 04:18:32 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:20.170 04:18:32 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:20.170 04:18:32 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:20.170 04:18:32 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:20.170 04:18:32 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:20.170 Cannot find device "nvmf_tgt_br" 00:15:20.170 04:18:32 -- nvmf/common.sh@154 -- # true 00:15:20.170 04:18:32 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:20.430 Cannot find device "nvmf_tgt_br2" 00:15:20.430 04:18:32 -- nvmf/common.sh@155 -- # true 00:15:20.430 04:18:32 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:20.430 04:18:32 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:20.430 Cannot find device "nvmf_tgt_br" 00:15:20.430 04:18:32 -- nvmf/common.sh@157 -- # true 00:15:20.430 04:18:32 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:20.430 Cannot find device "nvmf_tgt_br2" 00:15:20.430 04:18:32 -- nvmf/common.sh@158 -- # true 00:15:20.430 04:18:32 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:20.430 04:18:32 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:20.430 04:18:32 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:20.430 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:20.430 04:18:32 -- nvmf/common.sh@161 -- # true 00:15:20.430 04:18:32 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:20.430 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:20.430 04:18:32 -- nvmf/common.sh@162 -- # true 00:15:20.430 04:18:32 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:20.430 04:18:32 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:20.430 04:18:32 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:20.430 04:18:32 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:20.430 04:18:32 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:20.430 04:18:32 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:20.430 04:18:32 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:20.430 04:18:32 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:20.430 04:18:32 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:20.430 04:18:32 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:20.430 04:18:32 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:20.430 04:18:32 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:20.430 04:18:32 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:20.430 04:18:32 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:20.430 04:18:32 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
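For orientation, the nvmf_veth_init sequence traced above boils down to a small set of iproute2 commands: one network namespace for the SPDK target plus veth pairs that give the initiator (10.0.0.1) and the target (10.0.0.2, 10.0.0.3) their own interfaces. A condensed sketch, using only the interface names and addresses that appear in this run (the real common.sh additionally enslaves the *_br peer ends to an nvmf_br bridge, opens TCP port 4420 in iptables, and ping-checks the three addresses, as the following entries show):

    # namespace the nvmf_tgt process will later run in
    ip netns add nvmf_tgt_ns_spdk
    # veth pairs: the *_if ends carry traffic, the *_br ends are bridged later
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    # move the target-side ends into the namespace
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    # addressing: initiator on 10.0.0.1, target listeners on 10.0.0.2 / 10.0.0.3
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    # bring the links up
    ip link set nvmf_init_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up

With this topology in place, nvmf_tgt is started inside nvmf_tgt_ns_spdk and listens on 10.0.0.2:4420, while the host-side perf runs reach it from the root namespace via -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'.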
00:15:20.430 04:18:32 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:20.430 04:18:32 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:20.430 04:18:32 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:20.430 04:18:32 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:20.430 04:18:32 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:20.689 04:18:32 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:20.689 04:18:33 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:20.689 04:18:33 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:20.689 04:18:33 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:20.689 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:20.689 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.108 ms 00:15:20.689 00:15:20.689 --- 10.0.0.2 ping statistics --- 00:15:20.689 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:20.689 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:15:20.689 04:18:33 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:20.689 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:20.689 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:15:20.689 00:15:20.689 --- 10.0.0.3 ping statistics --- 00:15:20.689 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:20.689 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:15:20.689 04:18:33 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:20.689 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:20.689 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:15:20.689 00:15:20.689 --- 10.0.0.1 ping statistics --- 00:15:20.689 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:20.689 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:15:20.689 04:18:33 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:20.689 04:18:33 -- nvmf/common.sh@421 -- # return 0 00:15:20.689 04:18:33 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:20.689 04:18:33 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:20.689 04:18:33 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:20.689 04:18:33 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:20.689 04:18:33 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:20.689 04:18:33 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:20.689 04:18:33 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:20.689 04:18:33 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:15:20.689 04:18:33 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:20.689 04:18:33 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:20.689 04:18:33 -- common/autotest_common.sh@10 -- # set +x 00:15:20.689 04:18:33 -- nvmf/common.sh@469 -- # nvmfpid=81004 00:15:20.689 04:18:33 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:20.689 04:18:33 -- nvmf/common.sh@470 -- # waitforlisten 81004 00:15:20.689 04:18:33 -- common/autotest_common.sh@829 -- # '[' -z 81004 ']' 00:15:20.689 04:18:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:20.689 04:18:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:20.689 04:18:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:15:20.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:20.690 04:18:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:20.690 04:18:33 -- common/autotest_common.sh@10 -- # set +x 00:15:20.690 [2024-12-06 04:18:33.102684] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:20.690 [2024-12-06 04:18:33.102817] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:20.690 [2024-12-06 04:18:33.242634] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:20.949 [2024-12-06 04:18:33.331355] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:20.949 [2024-12-06 04:18:33.331526] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:20.949 [2024-12-06 04:18:33.331540] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:20.949 [2024-12-06 04:18:33.331549] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:20.949 [2024-12-06 04:18:33.331949] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:20.949 [2024-12-06 04:18:33.332108] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:20.949 [2024-12-06 04:18:33.332202] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:20.949 [2024-12-06 04:18:33.332208] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:21.885 04:18:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:21.885 04:18:34 -- common/autotest_common.sh@862 -- # return 0 00:15:21.885 04:18:34 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:21.885 04:18:34 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:21.885 04:18:34 -- common/autotest_common.sh@10 -- # set +x 00:15:21.885 04:18:34 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:21.885 04:18:34 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:21.885 04:18:34 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:15:22.144 04:18:34 -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:15:22.144 04:18:34 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:15:22.403 04:18:34 -- host/perf.sh@30 -- # local_nvme_trid=0000:00:06.0 00:15:22.403 04:18:34 -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:22.663 04:18:35 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:15:22.663 04:18:35 -- host/perf.sh@33 -- # '[' -n 0000:00:06.0 ']' 00:15:22.663 04:18:35 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:15:22.663 04:18:35 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:15:22.663 04:18:35 -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:22.921 [2024-12-06 04:18:35.388739] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:22.921 04:18:35 -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:23.179 04:18:35 -- host/perf.sh@45 -- # for bdev in 
$bdevs 00:15:23.179 04:18:35 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:23.437 04:18:35 -- host/perf.sh@45 -- # for bdev in $bdevs 00:15:23.437 04:18:35 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:15:23.695 04:18:36 -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:23.954 [2024-12-06 04:18:36.374766] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:23.954 04:18:36 -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:24.213 04:18:36 -- host/perf.sh@52 -- # '[' -n 0000:00:06.0 ']' 00:15:24.213 04:18:36 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0' 00:15:24.213 04:18:36 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:15:24.213 04:18:36 -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0' 00:15:25.149 Initializing NVMe Controllers 00:15:25.149 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:15:25.149 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:15:25.149 Initialization complete. Launching workers. 00:15:25.149 ======================================================== 00:15:25.149 Latency(us) 00:15:25.149 Device Information : IOPS MiB/s Average min max 00:15:25.149 PCIE (0000:00:06.0) NSID 1 from core 0: 24160.71 94.38 1323.97 372.69 7843.79 00:15:25.149 ======================================================== 00:15:25.149 Total : 24160.71 94.38 1323.97 372.69 7843.79 00:15:25.149 00:15:25.469 04:18:37 -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:26.863 Initializing NVMe Controllers 00:15:26.863 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:26.863 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:26.863 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:26.863 Initialization complete. Launching workers. 
00:15:26.863 ======================================================== 00:15:26.863 Latency(us) 00:15:26.863 Device Information : IOPS MiB/s Average min max 00:15:26.863 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3601.47 14.07 276.15 101.08 7223.16 00:15:26.863 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 123.88 0.48 8136.61 7029.33 12048.23 00:15:26.863 ======================================================== 00:15:26.863 Total : 3725.35 14.55 537.53 101.08 12048.23 00:15:26.863 00:15:26.863 04:18:39 -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:28.242 Initializing NVMe Controllers 00:15:28.242 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:28.242 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:28.242 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:28.242 Initialization complete. Launching workers. 00:15:28.242 ======================================================== 00:15:28.242 Latency(us) 00:15:28.242 Device Information : IOPS MiB/s Average min max 00:15:28.242 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8782.15 34.31 3644.34 481.79 7941.90 00:15:28.242 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4008.24 15.66 8001.77 5989.79 9520.68 00:15:28.242 ======================================================== 00:15:28.242 Total : 12790.40 49.96 5009.87 481.79 9520.68 00:15:28.242 00:15:28.242 04:18:40 -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:15:28.242 04:18:40 -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:30.776 Initializing NVMe Controllers 00:15:30.776 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:30.776 Controller IO queue size 128, less than required. 00:15:30.776 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:30.776 Controller IO queue size 128, less than required. 00:15:30.776 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:30.776 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:30.776 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:30.776 Initialization complete. Launching workers. 
00:15:30.776 ======================================================== 00:15:30.776 Latency(us) 00:15:30.776 Device Information : IOPS MiB/s Average min max 00:15:30.776 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1868.11 467.03 70607.97 39946.58 137947.86 00:15:30.776 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 665.61 166.40 194626.66 73238.25 307985.74 00:15:30.776 ======================================================== 00:15:30.776 Total : 2533.72 633.43 103187.91 39946.58 307985.74 00:15:30.776 00:15:30.776 04:18:42 -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:15:30.776 No valid NVMe controllers or AIO or URING devices found 00:15:30.776 Initializing NVMe Controllers 00:15:30.776 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:30.776 Controller IO queue size 128, less than required. 00:15:30.776 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:30.776 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:15:30.776 Controller IO queue size 128, less than required. 00:15:30.776 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:30.776 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:15:30.776 WARNING: Some requested NVMe devices were skipped 00:15:30.776 04:18:43 -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:15:33.311 Initializing NVMe Controllers 00:15:33.311 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:33.311 Controller IO queue size 128, less than required. 00:15:33.311 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:33.311 Controller IO queue size 128, less than required. 00:15:33.311 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:33.311 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:33.311 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:33.311 Initialization complete. Launching workers. 
00:15:33.311 00:15:33.311 ==================== 00:15:33.311 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:15:33.311 TCP transport: 00:15:33.311 polls: 6910 00:15:33.311 idle_polls: 0 00:15:33.311 sock_completions: 6910 00:15:33.311 nvme_completions: 6259 00:15:33.311 submitted_requests: 9633 00:15:33.311 queued_requests: 1 00:15:33.311 00:15:33.311 ==================== 00:15:33.311 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:15:33.311 TCP transport: 00:15:33.311 polls: 6950 00:15:33.311 idle_polls: 0 00:15:33.311 sock_completions: 6950 00:15:33.311 nvme_completions: 6754 00:15:33.311 submitted_requests: 10238 00:15:33.311 queued_requests: 1 00:15:33.311 ======================================================== 00:15:33.311 Latency(us) 00:15:33.311 Device Information : IOPS MiB/s Average min max 00:15:33.311 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1628.31 407.08 79239.69 39890.19 129792.69 00:15:33.311 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1751.79 437.95 73266.22 39394.77 141427.97 00:15:33.311 ======================================================== 00:15:33.311 Total : 3380.10 845.03 76143.84 39394.77 141427.97 00:15:33.311 00:15:33.311 04:18:45 -- host/perf.sh@66 -- # sync 00:15:33.311 04:18:45 -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:33.569 04:18:45 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:15:33.569 04:18:45 -- host/perf.sh@71 -- # '[' -n 0000:00:06.0 ']' 00:15:33.569 04:18:45 -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:15:33.828 04:18:46 -- host/perf.sh@72 -- # ls_guid=0eb206fc-c618-475d-bb83-e5b60fe0705f 00:15:33.828 04:18:46 -- host/perf.sh@73 -- # get_lvs_free_mb 0eb206fc-c618-475d-bb83-e5b60fe0705f 00:15:33.828 04:18:46 -- common/autotest_common.sh@1353 -- # local lvs_uuid=0eb206fc-c618-475d-bb83-e5b60fe0705f 00:15:33.828 04:18:46 -- common/autotest_common.sh@1354 -- # local lvs_info 00:15:33.828 04:18:46 -- common/autotest_common.sh@1355 -- # local fc 00:15:33.828 04:18:46 -- common/autotest_common.sh@1356 -- # local cs 00:15:33.828 04:18:46 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:15:34.086 04:18:46 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:15:34.086 { 00:15:34.086 "uuid": "0eb206fc-c618-475d-bb83-e5b60fe0705f", 00:15:34.086 "name": "lvs_0", 00:15:34.086 "base_bdev": "Nvme0n1", 00:15:34.086 "total_data_clusters": 1278, 00:15:34.086 "free_clusters": 1278, 00:15:34.086 "block_size": 4096, 00:15:34.086 "cluster_size": 4194304 00:15:34.086 } 00:15:34.086 ]' 00:15:34.086 04:18:46 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="0eb206fc-c618-475d-bb83-e5b60fe0705f") .free_clusters' 00:15:34.086 04:18:46 -- common/autotest_common.sh@1358 -- # fc=1278 00:15:34.086 04:18:46 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="0eb206fc-c618-475d-bb83-e5b60fe0705f") .cluster_size' 00:15:34.086 5112 00:15:34.086 04:18:46 -- common/autotest_common.sh@1359 -- # cs=4194304 00:15:34.086 04:18:46 -- common/autotest_common.sh@1362 -- # free_mb=5112 00:15:34.086 04:18:46 -- common/autotest_common.sh@1363 -- # echo 5112 00:15:34.086 04:18:46 -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:15:34.086 04:18:46 -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 
0eb206fc-c618-475d-bb83-e5b60fe0705f lbd_0 5112 00:15:34.344 04:18:46 -- host/perf.sh@80 -- # lb_guid=93b84732-dad3-414d-9393-e50745d259eb 00:15:34.344 04:18:46 -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore 93b84732-dad3-414d-9393-e50745d259eb lvs_n_0 00:15:34.603 04:18:47 -- host/perf.sh@83 -- # ls_nested_guid=e915a3ae-35be-4e5c-a758-1a41760d6bca 00:15:34.603 04:18:47 -- host/perf.sh@84 -- # get_lvs_free_mb e915a3ae-35be-4e5c-a758-1a41760d6bca 00:15:34.603 04:18:47 -- common/autotest_common.sh@1353 -- # local lvs_uuid=e915a3ae-35be-4e5c-a758-1a41760d6bca 00:15:34.603 04:18:47 -- common/autotest_common.sh@1354 -- # local lvs_info 00:15:34.603 04:18:47 -- common/autotest_common.sh@1355 -- # local fc 00:15:34.603 04:18:47 -- common/autotest_common.sh@1356 -- # local cs 00:15:34.603 04:18:47 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:15:34.862 04:18:47 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:15:34.862 { 00:15:34.862 "uuid": "0eb206fc-c618-475d-bb83-e5b60fe0705f", 00:15:34.862 "name": "lvs_0", 00:15:34.862 "base_bdev": "Nvme0n1", 00:15:34.862 "total_data_clusters": 1278, 00:15:34.862 "free_clusters": 0, 00:15:34.862 "block_size": 4096, 00:15:34.862 "cluster_size": 4194304 00:15:34.862 }, 00:15:34.862 { 00:15:34.862 "uuid": "e915a3ae-35be-4e5c-a758-1a41760d6bca", 00:15:34.862 "name": "lvs_n_0", 00:15:34.862 "base_bdev": "93b84732-dad3-414d-9393-e50745d259eb", 00:15:34.862 "total_data_clusters": 1276, 00:15:34.862 "free_clusters": 1276, 00:15:34.862 "block_size": 4096, 00:15:34.862 "cluster_size": 4194304 00:15:34.862 } 00:15:34.862 ]' 00:15:34.862 04:18:47 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="e915a3ae-35be-4e5c-a758-1a41760d6bca") .free_clusters' 00:15:35.121 04:18:47 -- common/autotest_common.sh@1358 -- # fc=1276 00:15:35.121 04:18:47 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="e915a3ae-35be-4e5c-a758-1a41760d6bca") .cluster_size' 00:15:35.121 5104 00:15:35.121 04:18:47 -- common/autotest_common.sh@1359 -- # cs=4194304 00:15:35.121 04:18:47 -- common/autotest_common.sh@1362 -- # free_mb=5104 00:15:35.121 04:18:47 -- common/autotest_common.sh@1363 -- # echo 5104 00:15:35.121 04:18:47 -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:15:35.121 04:18:47 -- host/perf.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u e915a3ae-35be-4e5c-a758-1a41760d6bca lbd_nest_0 5104 00:15:35.379 04:18:47 -- host/perf.sh@88 -- # lb_nested_guid=a16d2948-ca04-4a50-8212-d22dfb3b3f02 00:15:35.379 04:18:47 -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:35.638 04:18:48 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:15:35.638 04:18:48 -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 a16d2948-ca04-4a50-8212-d22dfb3b3f02 00:15:35.896 04:18:48 -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:36.154 04:18:48 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:15:36.154 04:18:48 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:15:36.154 04:18:48 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:15:36.154 04:18:48 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:15:36.154 04:18:48 -- host/perf.sh@99 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:36.412 No valid NVMe controllers or AIO or URING devices found 00:15:36.412 Initializing NVMe Controllers 00:15:36.412 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:36.412 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:15:36.412 WARNING: Some requested NVMe devices were skipped 00:15:36.412 04:18:48 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:15:36.412 04:18:48 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:48.658 Initializing NVMe Controllers 00:15:48.658 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:48.658 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:48.658 Initialization complete. Launching workers. 00:15:48.658 ======================================================== 00:15:48.658 Latency(us) 00:15:48.659 Device Information : IOPS MiB/s Average min max 00:15:48.659 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 944.50 118.06 1057.53 340.25 7696.79 00:15:48.659 ======================================================== 00:15:48.659 Total : 944.50 118.06 1057.53 340.25 7696.79 00:15:48.659 00:15:48.659 04:18:59 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:15:48.659 04:18:59 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:15:48.659 04:18:59 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:48.659 No valid NVMe controllers or AIO or URING devices found 00:15:48.659 Initializing NVMe Controllers 00:15:48.659 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:48.659 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:15:48.659 WARNING: Some requested NVMe devices were skipped 00:15:48.659 04:18:59 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:15:48.659 04:18:59 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:58.634 Initializing NVMe Controllers 00:15:58.634 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:58.634 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:58.634 Initialization complete. Launching workers. 
00:15:58.634 ======================================================== 00:15:58.634 Latency(us) 00:15:58.634 Device Information : IOPS MiB/s Average min max 00:15:58.634 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1315.59 164.45 24342.84 7336.29 71764.60 00:15:58.634 ======================================================== 00:15:58.634 Total : 1315.59 164.45 24342.84 7336.29 71764.60 00:15:58.634 00:15:58.634 04:19:09 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:15:58.634 04:19:09 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:15:58.634 04:19:09 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:58.634 No valid NVMe controllers or AIO or URING devices found 00:15:58.634 Initializing NVMe Controllers 00:15:58.634 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:58.634 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:15:58.634 WARNING: Some requested NVMe devices were skipped 00:15:58.634 04:19:09 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:15:58.634 04:19:09 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:16:08.625 Initializing NVMe Controllers 00:16:08.625 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:08.625 Controller IO queue size 128, less than required. 00:16:08.625 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:08.625 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:08.625 Initialization complete. Launching workers. 
00:16:08.626 ======================================================== 00:16:08.626 Latency(us) 00:16:08.626 Device Information : IOPS MiB/s Average min max 00:16:08.626 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4061.60 507.70 31559.16 11628.37 61605.86 00:16:08.626 ======================================================== 00:16:08.626 Total : 4061.60 507.70 31559.16 11628.37 61605.86 00:16:08.626 00:16:08.626 04:19:20 -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:08.626 04:19:20 -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete a16d2948-ca04-4a50-8212-d22dfb3b3f02 00:16:08.626 04:19:20 -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:16:08.626 04:19:21 -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 93b84732-dad3-414d-9393-e50745d259eb 00:16:08.884 04:19:21 -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:16:09.144 04:19:21 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:16:09.144 04:19:21 -- host/perf.sh@114 -- # nvmftestfini 00:16:09.144 04:19:21 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:09.144 04:19:21 -- nvmf/common.sh@116 -- # sync 00:16:09.144 04:19:21 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:09.144 04:19:21 -- nvmf/common.sh@119 -- # set +e 00:16:09.144 04:19:21 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:09.144 04:19:21 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:09.144 rmmod nvme_tcp 00:16:09.404 rmmod nvme_fabrics 00:16:09.404 rmmod nvme_keyring 00:16:09.404 04:19:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:09.404 04:19:21 -- nvmf/common.sh@123 -- # set -e 00:16:09.404 04:19:21 -- nvmf/common.sh@124 -- # return 0 00:16:09.404 04:19:21 -- nvmf/common.sh@477 -- # '[' -n 81004 ']' 00:16:09.404 04:19:21 -- nvmf/common.sh@478 -- # killprocess 81004 00:16:09.404 04:19:21 -- common/autotest_common.sh@936 -- # '[' -z 81004 ']' 00:16:09.404 04:19:21 -- common/autotest_common.sh@940 -- # kill -0 81004 00:16:09.404 04:19:21 -- common/autotest_common.sh@941 -- # uname 00:16:09.404 04:19:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:09.404 04:19:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81004 00:16:09.404 killing process with pid 81004 00:16:09.404 04:19:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:09.404 04:19:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:09.404 04:19:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81004' 00:16:09.404 04:19:21 -- common/autotest_common.sh@955 -- # kill 81004 00:16:09.404 04:19:21 -- common/autotest_common.sh@960 -- # wait 81004 00:16:11.310 04:19:23 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:11.310 04:19:23 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:11.310 04:19:23 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:11.310 04:19:23 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:11.310 04:19:23 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:11.310 04:19:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:11.310 04:19:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:11.310 04:19:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:11.310 04:19:23 -- nvmf/common.sh@278 -- # ip 
-4 addr flush nvmf_init_if 00:16:11.310 00:16:11.310 real 0m50.903s 00:16:11.310 user 3m9.771s 00:16:11.310 sys 0m13.572s 00:16:11.310 04:19:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:11.310 04:19:23 -- common/autotest_common.sh@10 -- # set +x 00:16:11.310 ************************************ 00:16:11.310 END TEST nvmf_perf 00:16:11.310 ************************************ 00:16:11.310 04:19:23 -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:16:11.310 04:19:23 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:11.310 04:19:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:11.310 04:19:23 -- common/autotest_common.sh@10 -- # set +x 00:16:11.310 ************************************ 00:16:11.310 START TEST nvmf_fio_host 00:16:11.310 ************************************ 00:16:11.310 04:19:23 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:16:11.310 * Looking for test storage... 00:16:11.310 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:11.310 04:19:23 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:11.310 04:19:23 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:11.310 04:19:23 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:11.310 04:19:23 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:11.310 04:19:23 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:11.310 04:19:23 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:11.310 04:19:23 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:11.310 04:19:23 -- scripts/common.sh@335 -- # IFS=.-: 00:16:11.310 04:19:23 -- scripts/common.sh@335 -- # read -ra ver1 00:16:11.310 04:19:23 -- scripts/common.sh@336 -- # IFS=.-: 00:16:11.310 04:19:23 -- scripts/common.sh@336 -- # read -ra ver2 00:16:11.310 04:19:23 -- scripts/common.sh@337 -- # local 'op=<' 00:16:11.310 04:19:23 -- scripts/common.sh@339 -- # ver1_l=2 00:16:11.310 04:19:23 -- scripts/common.sh@340 -- # ver2_l=1 00:16:11.310 04:19:23 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:11.310 04:19:23 -- scripts/common.sh@343 -- # case "$op" in 00:16:11.310 04:19:23 -- scripts/common.sh@344 -- # : 1 00:16:11.310 04:19:23 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:11.310 04:19:23 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:11.310 04:19:23 -- scripts/common.sh@364 -- # decimal 1 00:16:11.310 04:19:23 -- scripts/common.sh@352 -- # local d=1 00:16:11.310 04:19:23 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:11.310 04:19:23 -- scripts/common.sh@354 -- # echo 1 00:16:11.310 04:19:23 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:11.310 04:19:23 -- scripts/common.sh@365 -- # decimal 2 00:16:11.310 04:19:23 -- scripts/common.sh@352 -- # local d=2 00:16:11.310 04:19:23 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:11.310 04:19:23 -- scripts/common.sh@354 -- # echo 2 00:16:11.310 04:19:23 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:11.310 04:19:23 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:11.310 04:19:23 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:11.310 04:19:23 -- scripts/common.sh@367 -- # return 0 00:16:11.310 04:19:23 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:11.310 04:19:23 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:11.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:11.310 --rc genhtml_branch_coverage=1 00:16:11.310 --rc genhtml_function_coverage=1 00:16:11.310 --rc genhtml_legend=1 00:16:11.310 --rc geninfo_all_blocks=1 00:16:11.310 --rc geninfo_unexecuted_blocks=1 00:16:11.310 00:16:11.310 ' 00:16:11.310 04:19:23 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:11.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:11.310 --rc genhtml_branch_coverage=1 00:16:11.310 --rc genhtml_function_coverage=1 00:16:11.310 --rc genhtml_legend=1 00:16:11.310 --rc geninfo_all_blocks=1 00:16:11.310 --rc geninfo_unexecuted_blocks=1 00:16:11.310 00:16:11.310 ' 00:16:11.310 04:19:23 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:11.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:11.310 --rc genhtml_branch_coverage=1 00:16:11.310 --rc genhtml_function_coverage=1 00:16:11.310 --rc genhtml_legend=1 00:16:11.310 --rc geninfo_all_blocks=1 00:16:11.310 --rc geninfo_unexecuted_blocks=1 00:16:11.310 00:16:11.310 ' 00:16:11.310 04:19:23 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:11.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:11.310 --rc genhtml_branch_coverage=1 00:16:11.310 --rc genhtml_function_coverage=1 00:16:11.310 --rc genhtml_legend=1 00:16:11.310 --rc geninfo_all_blocks=1 00:16:11.310 --rc geninfo_unexecuted_blocks=1 00:16:11.310 00:16:11.310 ' 00:16:11.310 04:19:23 -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:11.310 04:19:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:11.310 04:19:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:11.310 04:19:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:11.310 04:19:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.310 04:19:23 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.310 04:19:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.310 04:19:23 -- paths/export.sh@5 -- # export PATH 00:16:11.310 04:19:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.310 04:19:23 -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:11.310 04:19:23 -- nvmf/common.sh@7 -- # uname -s 00:16:11.310 04:19:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:11.310 04:19:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:11.310 04:19:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:11.310 04:19:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:11.310 04:19:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:11.310 04:19:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:11.310 04:19:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:11.310 04:19:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:11.310 04:19:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:11.310 04:19:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:11.310 04:19:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cb4d3929-adbe-4142-b5d1-990bbf2d4fca 00:16:11.310 04:19:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=cb4d3929-adbe-4142-b5d1-990bbf2d4fca 00:16:11.310 04:19:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:11.310 04:19:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:11.310 04:19:23 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:11.310 04:19:23 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:11.310 04:19:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:11.310 04:19:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:11.310 04:19:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:11.310 04:19:23 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.311 04:19:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.311 04:19:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.311 04:19:23 -- paths/export.sh@5 -- # export PATH 00:16:11.311 04:19:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.311 04:19:23 -- nvmf/common.sh@46 -- # : 0 00:16:11.311 04:19:23 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:11.311 04:19:23 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:11.311 04:19:23 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:11.311 04:19:23 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:11.311 04:19:23 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:11.311 04:19:23 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:11.311 04:19:23 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:11.311 04:19:23 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:11.311 04:19:23 -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:11.311 04:19:23 -- host/fio.sh@14 -- # nvmftestinit 00:16:11.311 04:19:23 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:11.311 04:19:23 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:11.311 04:19:23 -- nvmf/common.sh@436 -- # prepare_net_devs 
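The common.sh prologue traced above generates a throwaway host identity with 'nvme gen-hostnqn', stores it in NVME_HOSTNQN/NVME_HOSTID, and wires both into the NVME_HOST array alongside NVME_CONNECT='nvme connect'. A minimal sketch of how an initiator would use those values against the 10.0.0.2:4420 listener this test later creates (the UUID below is a placeholder, not the one generated in this run):

    nvme connect -t tcp -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00000000-0000-0000-0000-000000000000 \
        --hostid=00000000-0000-0000-0000-000000000000

This particular run drives I/O through the SPDK fio plugin rather than the kernel initiator, so the sketch only illustrates what NVME_HOST is prepared for.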
00:16:11.311 04:19:23 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:11.311 04:19:23 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:11.311 04:19:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:11.311 04:19:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:11.311 04:19:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:11.311 04:19:23 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:11.311 04:19:23 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:11.311 04:19:23 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:11.311 04:19:23 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:11.311 04:19:23 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:11.311 04:19:23 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:11.311 04:19:23 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:11.311 04:19:23 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:11.311 04:19:23 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:11.311 04:19:23 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:11.311 04:19:23 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:11.311 04:19:23 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:11.311 04:19:23 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:11.311 04:19:23 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:11.311 04:19:23 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:11.311 04:19:23 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:11.311 04:19:23 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:11.311 04:19:23 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:11.311 04:19:23 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:11.311 04:19:23 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:11.311 Cannot find device "nvmf_tgt_br" 00:16:11.311 04:19:23 -- nvmf/common.sh@154 -- # true 00:16:11.311 04:19:23 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:11.311 Cannot find device "nvmf_tgt_br2" 00:16:11.311 04:19:23 -- nvmf/common.sh@155 -- # true 00:16:11.311 04:19:23 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:11.311 04:19:23 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:11.311 Cannot find device "nvmf_tgt_br" 00:16:11.311 04:19:23 -- nvmf/common.sh@157 -- # true 00:16:11.311 04:19:23 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:11.311 Cannot find device "nvmf_tgt_br2" 00:16:11.311 04:19:23 -- nvmf/common.sh@158 -- # true 00:16:11.311 04:19:23 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:11.311 04:19:23 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:11.311 04:19:23 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:11.311 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:11.311 04:19:23 -- nvmf/common.sh@161 -- # true 00:16:11.311 04:19:23 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:11.311 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:11.311 04:19:23 -- nvmf/common.sh@162 -- # true 00:16:11.311 04:19:23 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:11.311 04:19:23 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:11.311 04:19:23 
-- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:11.311 04:19:23 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:11.311 04:19:23 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:11.311 04:19:23 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:11.311 04:19:23 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:11.311 04:19:23 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:11.311 04:19:23 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:11.311 04:19:23 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:11.311 04:19:23 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:11.571 04:19:23 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:11.571 04:19:23 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:11.571 04:19:23 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:11.571 04:19:23 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:11.571 04:19:23 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:11.571 04:19:23 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:11.571 04:19:23 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:11.571 04:19:23 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:11.571 04:19:23 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:11.571 04:19:23 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:11.571 04:19:23 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:11.571 04:19:23 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:11.571 04:19:23 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:11.571 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:11.571 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:16:11.571 00:16:11.571 --- 10.0.0.2 ping statistics --- 00:16:11.571 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:11.571 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:16:11.571 04:19:23 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:11.571 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:11.571 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:16:11.571 00:16:11.571 --- 10.0.0.3 ping statistics --- 00:16:11.571 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:11.571 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:16:11.571 04:19:23 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:11.571 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:11.571 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:16:11.571 00:16:11.571 --- 10.0.0.1 ping statistics --- 00:16:11.571 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:11.571 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:16:11.571 04:19:23 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:11.571 04:19:23 -- nvmf/common.sh@421 -- # return 0 00:16:11.571 04:19:23 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:11.571 04:19:23 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:11.571 04:19:23 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:11.571 04:19:23 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:11.571 04:19:23 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:11.571 04:19:23 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:11.571 04:19:23 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:11.571 04:19:23 -- host/fio.sh@16 -- # [[ y != y ]] 00:16:11.571 04:19:23 -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:16:11.571 04:19:23 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:11.571 04:19:23 -- common/autotest_common.sh@10 -- # set +x 00:16:11.571 04:19:23 -- host/fio.sh@24 -- # nvmfpid=81825 00:16:11.571 04:19:23 -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:11.571 04:19:23 -- host/fio.sh@28 -- # waitforlisten 81825 00:16:11.571 04:19:23 -- common/autotest_common.sh@829 -- # '[' -z 81825 ']' 00:16:11.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:11.571 04:19:23 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:11.571 04:19:23 -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:11.571 04:19:23 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:11.571 04:19:23 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:11.571 04:19:23 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:11.571 04:19:23 -- common/autotest_common.sh@10 -- # set +x 00:16:11.571 [2024-12-06 04:19:24.037941] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:11.571 [2024-12-06 04:19:24.038047] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:11.831 [2024-12-06 04:19:24.184787] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:11.831 [2024-12-06 04:19:24.268475] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:11.831 [2024-12-06 04:19:24.268627] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:11.831 [2024-12-06 04:19:24.268641] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:11.831 [2024-12-06 04:19:24.268649] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
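The target is launched inside the nvmf_tgt_ns_spdk namespace (ip netns exec ... nvmf_tgt -i 0 -e 0xFFFF -m 0xF), so it owns the 10.0.0.2/10.0.0.3 veth endpoints that were just pinged: -i 0 selects the shared-memory instance ID, -m 0xF pins the app to four cores (the four reactors reported next), and -e 0xFFFF sets the tracepoint group mask, which is why the notice above suggests capturing a snapshot. A sketch of doing exactly what that notice says, while the target is still running:

    # capture the nvmf tracepoints from shared-memory instance 0, per the notice above
    spdk_trace -s nvmf -i 0 > nvmf_trace.snapshot.txt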
00:16:11.831 [2024-12-06 04:19:24.268819] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:11.831 [2024-12-06 04:19:24.269152] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:11.831 [2024-12-06 04:19:24.269760] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:11.831 [2024-12-06 04:19:24.269820] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:12.768 04:19:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:12.768 04:19:25 -- common/autotest_common.sh@862 -- # return 0 00:16:12.768 04:19:25 -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:12.768 [2024-12-06 04:19:25.222117] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:12.768 04:19:25 -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:16:12.768 04:19:25 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:12.768 04:19:25 -- common/autotest_common.sh@10 -- # set +x 00:16:12.768 04:19:25 -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:13.026 Malloc1 00:16:13.284 04:19:25 -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:13.543 04:19:25 -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:13.802 04:19:26 -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:14.061 [2024-12-06 04:19:26.368597] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:14.061 04:19:26 -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:14.061 04:19:26 -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:16:14.061 04:19:26 -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:16:14.061 04:19:26 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:16:14.061 04:19:26 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:16:14.061 04:19:26 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:14.061 04:19:26 -- common/autotest_common.sh@1328 -- # local sanitizers 00:16:14.061 04:19:26 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:14.061 04:19:26 -- common/autotest_common.sh@1330 -- # shift 00:16:14.061 04:19:26 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:16:14.061 04:19:26 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:16:14.061 04:19:26 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:16:14.061 04:19:26 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:14.061 04:19:26 -- common/autotest_common.sh@1334 -- # grep libasan 00:16:14.319 04:19:26 -- common/autotest_common.sh@1334 -- # asan_lib= 00:16:14.319 04:19:26 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:16:14.319 04:19:26 -- 
common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:16:14.319 04:19:26 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:14.319 04:19:26 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:16:14.319 04:19:26 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:16:14.319 04:19:26 -- common/autotest_common.sh@1334 -- # asan_lib= 00:16:14.319 04:19:26 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:16:14.319 04:19:26 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:16:14.319 04:19:26 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:16:14.319 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:16:14.319 fio-3.35 00:16:14.319 Starting 1 thread 00:16:16.909 00:16:16.909 test: (groupid=0, jobs=1): err= 0: pid=81908: Fri Dec 6 04:19:29 2024 00:16:16.909 read: IOPS=9304, BW=36.3MiB/s (38.1MB/s)(72.9MiB/2007msec) 00:16:16.909 slat (nsec): min=1898, max=2502.0k, avg=2607.85, stdev=18543.52 00:16:16.909 clat (usec): min=2370, max=12620, avg=7155.32, stdev=528.95 00:16:16.909 lat (usec): min=2402, max=12622, avg=7157.93, stdev=528.29 00:16:16.909 clat percentiles (usec): 00:16:16.909 | 1.00th=[ 6063], 5.00th=[ 6390], 10.00th=[ 6587], 20.00th=[ 6783], 00:16:16.909 | 30.00th=[ 6915], 40.00th=[ 7046], 50.00th=[ 7111], 60.00th=[ 7242], 00:16:16.909 | 70.00th=[ 7373], 80.00th=[ 7504], 90.00th=[ 7767], 95.00th=[ 7963], 00:16:16.909 | 99.00th=[ 8455], 99.50th=[ 8979], 99.90th=[11207], 99.95th=[11994], 00:16:16.909 | 99.99th=[12518] 00:16:16.909 bw ( KiB/s): min=36120, max=37976, per=99.97%, avg=37210.00, stdev=792.99, samples=4 00:16:16.909 iops : min= 9030, max= 9494, avg=9302.50, stdev=198.25, samples=4 00:16:16.909 write: IOPS=9308, BW=36.4MiB/s (38.1MB/s)(73.0MiB/2007msec); 0 zone resets 00:16:16.909 slat (usec): min=2, max=232, avg= 2.58, stdev= 2.17 00:16:16.909 clat (usec): min=2219, max=12283, avg=6529.32, stdev=475.86 00:16:16.909 lat (usec): min=2231, max=12285, avg=6531.89, stdev=475.71 00:16:16.909 clat percentiles (usec): 00:16:16.909 | 1.00th=[ 5473], 5.00th=[ 5866], 10.00th=[ 5997], 20.00th=[ 6194], 00:16:16.909 | 30.00th=[ 6325], 40.00th=[ 6390], 50.00th=[ 6521], 60.00th=[ 6587], 00:16:16.909 | 70.00th=[ 6718], 80.00th=[ 6849], 90.00th=[ 7046], 95.00th=[ 7242], 00:16:16.909 | 99.00th=[ 7635], 99.50th=[ 8225], 99.90th=[ 9372], 99.95th=[10290], 00:16:16.909 | 99.99th=[11994] 00:16:16.909 bw ( KiB/s): min=37064, max=37632, per=100.00%, avg=37250.00, stdev=259.34, samples=4 00:16:16.909 iops : min= 9266, max= 9408, avg=9312.50, stdev=64.84, samples=4 00:16:16.909 lat (msec) : 4=0.12%, 10=99.76%, 20=0.11% 00:16:16.909 cpu : usr=68.34%, sys=23.93%, ctx=11, majf=0, minf=5 00:16:16.909 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:16:16.909 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:16.909 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:16.909 issued rwts: total=18675,18682,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:16.909 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:16.909 00:16:16.909 Run status group 0 (all jobs): 00:16:16.909 READ: bw=36.3MiB/s (38.1MB/s), 36.3MiB/s-36.3MiB/s (38.1MB/s-38.1MB/s), io=72.9MiB (76.5MB), 
run=2007-2007msec 00:16:16.909 WRITE: bw=36.4MiB/s (38.1MB/s), 36.4MiB/s-36.4MiB/s (38.1MB/s-38.1MB/s), io=73.0MiB (76.5MB), run=2007-2007msec 00:16:16.909 04:19:29 -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:16:16.909 04:19:29 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:16:16.909 04:19:29 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:16:16.909 04:19:29 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:16.909 04:19:29 -- common/autotest_common.sh@1328 -- # local sanitizers 00:16:16.909 04:19:29 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:16.909 04:19:29 -- common/autotest_common.sh@1330 -- # shift 00:16:16.909 04:19:29 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:16:16.909 04:19:29 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:16:16.909 04:19:29 -- common/autotest_common.sh@1334 -- # grep libasan 00:16:16.909 04:19:29 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:16:16.909 04:19:29 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:16.909 04:19:29 -- common/autotest_common.sh@1334 -- # asan_lib= 00:16:16.909 04:19:29 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:16:16.909 04:19:29 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:16:16.909 04:19:29 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:16.909 04:19:29 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:16:16.909 04:19:29 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:16:16.909 04:19:29 -- common/autotest_common.sh@1334 -- # asan_lib= 00:16:16.909 04:19:29 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:16:16.909 04:19:29 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:16:16.910 04:19:29 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:16:16.910 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:16:16.910 fio-3.35 00:16:16.910 Starting 1 thread 00:16:19.442 00:16:19.442 test: (groupid=0, jobs=1): err= 0: pid=81957: Fri Dec 6 04:19:31 2024 00:16:19.442 read: IOPS=8517, BW=133MiB/s (140MB/s)(268MiB/2010msec) 00:16:19.442 slat (usec): min=2, max=113, avg= 3.92, stdev= 2.43 00:16:19.442 clat (usec): min=1966, max=19239, avg=8217.30, stdev=2699.55 00:16:19.442 lat (usec): min=1969, max=19242, avg=8221.22, stdev=2699.62 00:16:19.442 clat percentiles (usec): 00:16:19.442 | 1.00th=[ 3982], 5.00th=[ 4686], 10.00th=[ 5145], 20.00th=[ 5800], 00:16:19.442 | 30.00th=[ 6456], 40.00th=[ 7177], 50.00th=[ 7832], 60.00th=[ 8455], 00:16:19.442 | 70.00th=[ 9372], 80.00th=[10421], 90.00th=[11863], 95.00th=[13435], 00:16:19.442 | 99.00th=[15926], 99.50th=[16712], 99.90th=[18220], 99.95th=[18482], 00:16:19.442 | 99.99th=[19268] 00:16:19.442 bw ( KiB/s): min=62976, max=77696, per=51.48%, avg=70152.00, stdev=6230.96, samples=4 00:16:19.442 iops : 
min= 3936, max= 4856, avg=4384.50, stdev=389.44, samples=4 00:16:19.442 write: IOPS=4921, BW=76.9MiB/s (80.6MB/s)(142MiB/1852msec); 0 zone resets 00:16:19.442 slat (usec): min=32, max=358, avg=39.18, stdev= 8.77 00:16:19.442 clat (usec): min=2857, max=21198, avg=11894.54, stdev=2024.65 00:16:19.442 lat (usec): min=2893, max=21245, avg=11933.72, stdev=2025.86 00:16:19.442 clat percentiles (usec): 00:16:19.442 | 1.00th=[ 7635], 5.00th=[ 8979], 10.00th=[ 9503], 20.00th=[10290], 00:16:19.442 | 30.00th=[10814], 40.00th=[11207], 50.00th=[11731], 60.00th=[12256], 00:16:19.442 | 70.00th=[12780], 80.00th=[13566], 90.00th=[14615], 95.00th=[15533], 00:16:19.442 | 99.00th=[16909], 99.50th=[17433], 99.90th=[18220], 99.95th=[18482], 00:16:19.442 | 99.99th=[21103] 00:16:19.442 bw ( KiB/s): min=65088, max=81216, per=92.60%, avg=72920.00, stdev=6814.89, samples=4 00:16:19.442 iops : min= 4068, max= 5076, avg=4557.50, stdev=425.93, samples=4 00:16:19.442 lat (msec) : 2=0.01%, 4=0.71%, 10=54.32%, 20=44.96%, 50=0.01% 00:16:19.442 cpu : usr=78.85%, sys=15.43%, ctx=11, majf=0, minf=1 00:16:19.442 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:16:19.442 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:19.442 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:19.442 issued rwts: total=17120,9115,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:19.442 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:19.442 00:16:19.442 Run status group 0 (all jobs): 00:16:19.442 READ: bw=133MiB/s (140MB/s), 133MiB/s-133MiB/s (140MB/s-140MB/s), io=268MiB (280MB), run=2010-2010msec 00:16:19.442 WRITE: bw=76.9MiB/s (80.6MB/s), 76.9MiB/s-76.9MiB/s (80.6MB/s-80.6MB/s), io=142MiB (149MB), run=1852-1852msec 00:16:19.442 04:19:31 -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:19.442 04:19:31 -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:16:19.442 04:19:31 -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:16:19.442 04:19:31 -- host/fio.sh@51 -- # get_nvme_bdfs 00:16:19.442 04:19:31 -- common/autotest_common.sh@1508 -- # bdfs=() 00:16:19.442 04:19:31 -- common/autotest_common.sh@1508 -- # local bdfs 00:16:19.442 04:19:31 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:16:19.442 04:19:31 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:16:19.442 04:19:31 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:16:19.442 04:19:31 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:16:19.442 04:19:31 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:16:19.442 04:19:31 -- host/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 -i 10.0.0.2 00:16:19.701 Nvme0n1 00:16:19.701 04:19:32 -- host/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:16:19.960 04:19:32 -- host/fio.sh@53 -- # ls_guid=df63f42c-8fec-4320-a92e-1515533e6092 00:16:19.960 04:19:32 -- host/fio.sh@54 -- # get_lvs_free_mb df63f42c-8fec-4320-a92e-1515533e6092 00:16:19.960 04:19:32 -- common/autotest_common.sh@1353 -- # local lvs_uuid=df63f42c-8fec-4320-a92e-1515533e6092 00:16:19.960 04:19:32 -- common/autotest_common.sh@1354 -- # local lvs_info 00:16:19.960 04:19:32 -- common/autotest_common.sh@1355 -- # local fc 00:16:19.960 
04:19:32 -- common/autotest_common.sh@1356 -- # local cs 00:16:19.960 04:19:32 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:20.219 04:19:32 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:16:20.219 { 00:16:20.219 "uuid": "df63f42c-8fec-4320-a92e-1515533e6092", 00:16:20.219 "name": "lvs_0", 00:16:20.219 "base_bdev": "Nvme0n1", 00:16:20.219 "total_data_clusters": 4, 00:16:20.219 "free_clusters": 4, 00:16:20.219 "block_size": 4096, 00:16:20.219 "cluster_size": 1073741824 00:16:20.219 } 00:16:20.219 ]' 00:16:20.219 04:19:32 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="df63f42c-8fec-4320-a92e-1515533e6092") .free_clusters' 00:16:20.219 04:19:32 -- common/autotest_common.sh@1358 -- # fc=4 00:16:20.219 04:19:32 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="df63f42c-8fec-4320-a92e-1515533e6092") .cluster_size' 00:16:20.219 04:19:32 -- common/autotest_common.sh@1359 -- # cs=1073741824 00:16:20.219 04:19:32 -- common/autotest_common.sh@1362 -- # free_mb=4096 00:16:20.219 4096 00:16:20.219 04:19:32 -- common/autotest_common.sh@1363 -- # echo 4096 00:16:20.219 04:19:32 -- host/fio.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 4096 00:16:20.787 c302402f-00cd-46ff-b4dc-a95617ecfc1c 00:16:20.787 04:19:33 -- host/fio.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:16:20.787 04:19:33 -- host/fio.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:16:21.046 04:19:33 -- host/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:16:21.307 04:19:33 -- host/fio.sh@59 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:16:21.307 04:19:33 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:16:21.307 04:19:33 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:16:21.307 04:19:33 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:21.307 04:19:33 -- common/autotest_common.sh@1328 -- # local sanitizers 00:16:21.307 04:19:33 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:21.307 04:19:33 -- common/autotest_common.sh@1330 -- # shift 00:16:21.307 04:19:33 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:16:21.307 04:19:33 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:16:21.307 04:19:33 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:21.307 04:19:33 -- common/autotest_common.sh@1334 -- # grep libasan 00:16:21.307 04:19:33 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:16:21.307 04:19:33 -- common/autotest_common.sh@1334 -- # asan_lib= 00:16:21.307 04:19:33 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:16:21.307 04:19:33 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:16:21.307 04:19:33 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:21.307 04:19:33 
-- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:16:21.307 04:19:33 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:16:21.307 04:19:33 -- common/autotest_common.sh@1334 -- # asan_lib= 00:16:21.307 04:19:33 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:16:21.307 04:19:33 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:16:21.307 04:19:33 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:16:21.566 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:16:21.566 fio-3.35 00:16:21.566 Starting 1 thread 00:16:24.095 00:16:24.095 test: (groupid=0, jobs=1): err= 0: pid=82061: Fri Dec 6 04:19:36 2024 00:16:24.095 read: IOPS=6677, BW=26.1MiB/s (27.4MB/s)(52.4MiB/2008msec) 00:16:24.095 slat (nsec): min=1873, max=319509, avg=2479.33, stdev=3945.74 00:16:24.095 clat (usec): min=2969, max=16935, avg=9999.16, stdev=814.35 00:16:24.095 lat (usec): min=2979, max=16938, avg=10001.64, stdev=814.02 00:16:24.095 clat percentiles (usec): 00:16:24.095 | 1.00th=[ 8225], 5.00th=[ 8717], 10.00th=[ 9110], 20.00th=[ 9372], 00:16:24.095 | 30.00th=[ 9634], 40.00th=[ 9765], 50.00th=[10028], 60.00th=[10159], 00:16:24.095 | 70.00th=[10421], 80.00th=[10683], 90.00th=[10945], 95.00th=[11338], 00:16:24.095 | 99.00th=[11863], 99.50th=[12125], 99.90th=[14615], 99.95th=[15795], 00:16:24.095 | 99.99th=[16909] 00:16:24.095 bw ( KiB/s): min=25800, max=27136, per=99.89%, avg=26682.00, stdev=601.06, samples=4 00:16:24.095 iops : min= 6450, max= 6784, avg=6670.50, stdev=150.27, samples=4 00:16:24.095 write: IOPS=6680, BW=26.1MiB/s (27.4MB/s)(52.4MiB/2008msec); 0 zone resets 00:16:24.095 slat (nsec): min=1952, max=238970, avg=2558.96, stdev=2602.87 00:16:24.095 clat (usec): min=2459, max=16999, avg=9070.99, stdev=783.05 00:16:24.095 lat (usec): min=2473, max=17002, avg=9073.55, stdev=782.91 00:16:24.095 clat percentiles (usec): 00:16:24.095 | 1.00th=[ 7373], 5.00th=[ 7963], 10.00th=[ 8160], 20.00th=[ 8455], 00:16:24.095 | 30.00th=[ 8717], 40.00th=[ 8848], 50.00th=[ 9110], 60.00th=[ 9241], 00:16:24.095 | 70.00th=[ 9372], 80.00th=[ 9634], 90.00th=[10028], 95.00th=[10290], 00:16:24.095 | 99.00th=[10814], 99.50th=[11207], 99.90th=[14353], 99.95th=[15664], 00:16:24.095 | 99.99th=[16909] 00:16:24.095 bw ( KiB/s): min=26432, max=27088, per=99.96%, avg=26710.00, stdev=285.87, samples=4 00:16:24.095 iops : min= 6608, max= 6772, avg=6677.50, stdev=71.47, samples=4 00:16:24.095 lat (msec) : 4=0.06%, 10=70.67%, 20=29.27% 00:16:24.095 cpu : usr=74.74%, sys=19.48%, ctx=529, majf=0, minf=14 00:16:24.095 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:16:24.095 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:24.095 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:24.095 issued rwts: total=13409,13414,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:24.095 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:24.095 00:16:24.095 Run status group 0 (all jobs): 00:16:24.095 READ: bw=26.1MiB/s (27.4MB/s), 26.1MiB/s-26.1MiB/s (27.4MB/s-27.4MB/s), io=52.4MiB (54.9MB), run=2008-2008msec 00:16:24.095 WRITE: bw=26.1MiB/s (27.4MB/s), 26.1MiB/s-26.1MiB/s (27.4MB/s-27.4MB/s), io=52.4MiB (54.9MB), run=2008-2008msec 00:16:24.095 04:19:36 -- host/fio.sh@61 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:16:24.095 04:19:36 -- host/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:16:24.353 04:19:36 -- host/fio.sh@64 -- # ls_nested_guid=a6938c88-3eae-4904-bf68-20576799a63e 00:16:24.353 04:19:36 -- host/fio.sh@65 -- # get_lvs_free_mb a6938c88-3eae-4904-bf68-20576799a63e 00:16:24.353 04:19:36 -- common/autotest_common.sh@1353 -- # local lvs_uuid=a6938c88-3eae-4904-bf68-20576799a63e 00:16:24.353 04:19:36 -- common/autotest_common.sh@1354 -- # local lvs_info 00:16:24.353 04:19:36 -- common/autotest_common.sh@1355 -- # local fc 00:16:24.353 04:19:36 -- common/autotest_common.sh@1356 -- # local cs 00:16:24.353 04:19:36 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:24.611 04:19:37 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:16:24.611 { 00:16:24.611 "uuid": "df63f42c-8fec-4320-a92e-1515533e6092", 00:16:24.611 "name": "lvs_0", 00:16:24.611 "base_bdev": "Nvme0n1", 00:16:24.611 "total_data_clusters": 4, 00:16:24.611 "free_clusters": 0, 00:16:24.611 "block_size": 4096, 00:16:24.611 "cluster_size": 1073741824 00:16:24.611 }, 00:16:24.611 { 00:16:24.611 "uuid": "a6938c88-3eae-4904-bf68-20576799a63e", 00:16:24.611 "name": "lvs_n_0", 00:16:24.611 "base_bdev": "c302402f-00cd-46ff-b4dc-a95617ecfc1c", 00:16:24.611 "total_data_clusters": 1022, 00:16:24.611 "free_clusters": 1022, 00:16:24.611 "block_size": 4096, 00:16:24.611 "cluster_size": 4194304 00:16:24.611 } 00:16:24.611 ]' 00:16:24.611 04:19:37 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="a6938c88-3eae-4904-bf68-20576799a63e") .free_clusters' 00:16:24.611 04:19:37 -- common/autotest_common.sh@1358 -- # fc=1022 00:16:24.611 04:19:37 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="a6938c88-3eae-4904-bf68-20576799a63e") .cluster_size' 00:16:24.611 4088 00:16:24.611 04:19:37 -- common/autotest_common.sh@1359 -- # cs=4194304 00:16:24.611 04:19:37 -- common/autotest_common.sh@1362 -- # free_mb=4088 00:16:24.611 04:19:37 -- common/autotest_common.sh@1363 -- # echo 4088 00:16:24.611 04:19:37 -- host/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:16:24.869 ed0d5bcc-df37-4112-8066-74a80a088860 00:16:24.869 04:19:37 -- host/fio.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:16:25.127 04:19:37 -- host/fio.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:16:25.386 04:19:37 -- host/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:16:25.644 04:19:38 -- host/fio.sh@70 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:16:25.644 04:19:38 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:16:25.644 04:19:38 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:16:25.644 04:19:38 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:25.644 
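The fio_nvme helper being traced here wraps fio around the SPDK external ioengine: it LD_PRELOADs build/fio/spdk_nvme (after checking with ldd whether a sanitizer runtime must be preloaded first) and passes the target coordinates through fio's --filename instead of a device path, with trtype/adrfam/traddr/trsvcid/ns naming the transport, address family, target address, port and namespace. Reduced to its essentials, the invocation assembled below amounts to:

    # sketch of the fio plugin invocation built by fio_plugin
    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme \
        /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
        '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096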
04:19:38 -- common/autotest_common.sh@1328 -- # local sanitizers 00:16:25.644 04:19:38 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:25.644 04:19:38 -- common/autotest_common.sh@1330 -- # shift 00:16:25.644 04:19:38 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:16:25.644 04:19:38 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:16:25.644 04:19:38 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:25.644 04:19:38 -- common/autotest_common.sh@1334 -- # grep libasan 00:16:25.644 04:19:38 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:16:25.644 04:19:38 -- common/autotest_common.sh@1334 -- # asan_lib= 00:16:25.644 04:19:38 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:16:25.644 04:19:38 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:16:25.644 04:19:38 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:16:25.644 04:19:38 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:25.644 04:19:38 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:16:25.644 04:19:38 -- common/autotest_common.sh@1334 -- # asan_lib= 00:16:25.644 04:19:38 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:16:25.644 04:19:38 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:16:25.644 04:19:38 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:16:25.902 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:16:25.902 fio-3.35 00:16:25.902 Starting 1 thread 00:16:28.436 00:16:28.436 test: (groupid=0, jobs=1): err= 0: pid=82145: Fri Dec 6 04:19:40 2024 00:16:28.436 read: IOPS=5906, BW=23.1MiB/s (24.2MB/s)(46.4MiB/2009msec) 00:16:28.436 slat (nsec): min=1937, max=323370, avg=2703.00, stdev=4063.15 00:16:28.436 clat (usec): min=3282, max=19244, avg=11336.51, stdev=951.42 00:16:28.436 lat (usec): min=3292, max=19246, avg=11339.21, stdev=951.07 00:16:28.436 clat percentiles (usec): 00:16:28.436 | 1.00th=[ 9241], 5.00th=[ 9896], 10.00th=[10290], 20.00th=[10552], 00:16:28.436 | 30.00th=[10814], 40.00th=[11076], 50.00th=[11338], 60.00th=[11600], 00:16:28.436 | 70.00th=[11731], 80.00th=[12125], 90.00th=[12518], 95.00th=[12780], 00:16:28.436 | 99.00th=[13435], 99.50th=[13829], 99.90th=[17957], 99.95th=[19006], 00:16:28.436 | 99.99th=[19268] 00:16:28.436 bw ( KiB/s): min=23048, max=23912, per=99.94%, avg=23612.00, stdev=390.91, samples=4 00:16:28.436 iops : min= 5762, max= 5978, avg=5903.00, stdev=97.73, samples=4 00:16:28.436 write: IOPS=5904, BW=23.1MiB/s (24.2MB/s)(46.3MiB/2009msec); 0 zone resets 00:16:28.436 slat (nsec): min=1965, max=287951, avg=2743.57, stdev=3197.12 00:16:28.436 clat (usec): min=2438, max=19838, avg=10259.14, stdev=903.03 00:16:28.436 lat (usec): min=2452, max=19840, avg=10261.89, stdev=902.88 00:16:28.436 clat percentiles (usec): 00:16:28.436 | 1.00th=[ 8356], 5.00th=[ 8979], 10.00th=[ 9241], 20.00th=[ 9634], 00:16:28.436 | 30.00th=[ 9765], 40.00th=[10028], 50.00th=[10290], 60.00th=[10421], 00:16:28.436 | 70.00th=[10683], 80.00th=[10945], 90.00th=[11338], 95.00th=[11600], 00:16:28.436 | 99.00th=[12256], 99.50th=[12518], 99.90th=[16319], 99.95th=[19006], 00:16:28.436 | 99.99th=[19792] 
00:16:28.436 bw ( KiB/s): min=23448, max=23944, per=99.88%, avg=23592.00, stdev=235.42, samples=4 00:16:28.436 iops : min= 5862, max= 5986, avg=5898.00, stdev=58.86, samples=4 00:16:28.436 lat (msec) : 4=0.05%, 10=21.69%, 20=78.25% 00:16:28.436 cpu : usr=74.85%, sys=20.02%, ctx=8, majf=0, minf=14 00:16:28.436 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:16:28.436 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:28.436 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:28.436 issued rwts: total=11866,11863,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:28.436 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:28.436 00:16:28.436 Run status group 0 (all jobs): 00:16:28.436 READ: bw=23.1MiB/s (24.2MB/s), 23.1MiB/s-23.1MiB/s (24.2MB/s-24.2MB/s), io=46.4MiB (48.6MB), run=2009-2009msec 00:16:28.436 WRITE: bw=23.1MiB/s (24.2MB/s), 23.1MiB/s-23.1MiB/s (24.2MB/s-24.2MB/s), io=46.3MiB (48.6MB), run=2009-2009msec 00:16:28.436 04:19:40 -- host/fio.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:16:28.436 04:19:40 -- host/fio.sh@74 -- # sync 00:16:28.436 04:19:40 -- host/fio.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:16:28.694 04:19:41 -- host/fio.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:16:29.261 04:19:41 -- host/fio.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:16:29.261 04:19:41 -- host/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:16:29.520 04:19:42 -- host/fio.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:16:30.457 04:19:42 -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:16:30.457 04:19:42 -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:16:30.457 04:19:42 -- host/fio.sh@86 -- # nvmftestfini 00:16:30.457 04:19:42 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:30.457 04:19:42 -- nvmf/common.sh@116 -- # sync 00:16:30.457 04:19:42 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:30.457 04:19:42 -- nvmf/common.sh@119 -- # set +e 00:16:30.457 04:19:42 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:30.457 04:19:42 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:30.457 rmmod nvme_tcp 00:16:30.457 rmmod nvme_fabrics 00:16:30.457 rmmod nvme_keyring 00:16:30.457 04:19:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:30.457 04:19:42 -- nvmf/common.sh@123 -- # set -e 00:16:30.457 04:19:42 -- nvmf/common.sh@124 -- # return 0 00:16:30.457 04:19:42 -- nvmf/common.sh@477 -- # '[' -n 81825 ']' 00:16:30.457 04:19:42 -- nvmf/common.sh@478 -- # killprocess 81825 00:16:30.457 04:19:42 -- common/autotest_common.sh@936 -- # '[' -z 81825 ']' 00:16:30.457 04:19:42 -- common/autotest_common.sh@940 -- # kill -0 81825 00:16:30.457 04:19:42 -- common/autotest_common.sh@941 -- # uname 00:16:30.457 04:19:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:30.457 04:19:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81825 00:16:30.715 killing process with pid 81825 00:16:30.715 04:19:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:30.715 04:19:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:30.715 04:19:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81825' 00:16:30.715 04:19:43 -- 
common/autotest_common.sh@955 -- # kill 81825 00:16:30.715 04:19:43 -- common/autotest_common.sh@960 -- # wait 81825 00:16:30.715 04:19:43 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:30.715 04:19:43 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:30.715 04:19:43 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:30.715 04:19:43 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:30.716 04:19:43 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:30.716 04:19:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:30.716 04:19:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:30.716 04:19:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:30.716 04:19:43 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:30.716 00:16:30.716 real 0m19.838s 00:16:30.716 user 1m27.335s 00:16:30.716 sys 0m4.439s 00:16:30.716 04:19:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:30.716 04:19:43 -- common/autotest_common.sh@10 -- # set +x 00:16:30.716 ************************************ 00:16:30.716 END TEST nvmf_fio_host 00:16:30.716 ************************************ 00:16:30.975 04:19:43 -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:16:30.975 04:19:43 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:30.975 04:19:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:30.975 04:19:43 -- common/autotest_common.sh@10 -- # set +x 00:16:30.975 ************************************ 00:16:30.975 START TEST nvmf_failover 00:16:30.975 ************************************ 00:16:30.975 04:19:43 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:16:30.975 * Looking for test storage... 00:16:30.975 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:30.975 04:19:43 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:30.975 04:19:43 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:30.975 04:19:43 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:30.975 04:19:43 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:30.975 04:19:43 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:30.975 04:19:43 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:30.975 04:19:43 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:30.975 04:19:43 -- scripts/common.sh@335 -- # IFS=.-: 00:16:30.975 04:19:43 -- scripts/common.sh@335 -- # read -ra ver1 00:16:30.975 04:19:43 -- scripts/common.sh@336 -- # IFS=.-: 00:16:30.975 04:19:43 -- scripts/common.sh@336 -- # read -ra ver2 00:16:30.975 04:19:43 -- scripts/common.sh@337 -- # local 'op=<' 00:16:30.975 04:19:43 -- scripts/common.sh@339 -- # ver1_l=2 00:16:30.975 04:19:43 -- scripts/common.sh@340 -- # ver2_l=1 00:16:30.975 04:19:43 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:30.975 04:19:43 -- scripts/common.sh@343 -- # case "$op" in 00:16:30.975 04:19:43 -- scripts/common.sh@344 -- # : 1 00:16:30.975 04:19:43 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:30.975 04:19:43 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:30.975 04:19:43 -- scripts/common.sh@364 -- # decimal 1 00:16:30.975 04:19:43 -- scripts/common.sh@352 -- # local d=1 00:16:30.975 04:19:43 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:30.975 04:19:43 -- scripts/common.sh@354 -- # echo 1 00:16:30.975 04:19:43 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:30.975 04:19:43 -- scripts/common.sh@365 -- # decimal 2 00:16:30.975 04:19:43 -- scripts/common.sh@352 -- # local d=2 00:16:30.975 04:19:43 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:30.975 04:19:43 -- scripts/common.sh@354 -- # echo 2 00:16:30.975 04:19:43 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:30.975 04:19:43 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:30.975 04:19:43 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:30.975 04:19:43 -- scripts/common.sh@367 -- # return 0 00:16:30.975 04:19:43 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:30.975 04:19:43 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:30.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:30.975 --rc genhtml_branch_coverage=1 00:16:30.975 --rc genhtml_function_coverage=1 00:16:30.975 --rc genhtml_legend=1 00:16:30.975 --rc geninfo_all_blocks=1 00:16:30.975 --rc geninfo_unexecuted_blocks=1 00:16:30.975 00:16:30.975 ' 00:16:30.975 04:19:43 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:30.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:30.975 --rc genhtml_branch_coverage=1 00:16:30.975 --rc genhtml_function_coverage=1 00:16:30.975 --rc genhtml_legend=1 00:16:30.975 --rc geninfo_all_blocks=1 00:16:30.975 --rc geninfo_unexecuted_blocks=1 00:16:30.975 00:16:30.975 ' 00:16:30.975 04:19:43 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:30.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:30.975 --rc genhtml_branch_coverage=1 00:16:30.976 --rc genhtml_function_coverage=1 00:16:30.976 --rc genhtml_legend=1 00:16:30.976 --rc geninfo_all_blocks=1 00:16:30.976 --rc geninfo_unexecuted_blocks=1 00:16:30.976 00:16:30.976 ' 00:16:30.976 04:19:43 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:30.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:30.976 --rc genhtml_branch_coverage=1 00:16:30.976 --rc genhtml_function_coverage=1 00:16:30.976 --rc genhtml_legend=1 00:16:30.976 --rc geninfo_all_blocks=1 00:16:30.976 --rc geninfo_unexecuted_blocks=1 00:16:30.976 00:16:30.976 ' 00:16:30.976 04:19:43 -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:30.976 04:19:43 -- nvmf/common.sh@7 -- # uname -s 00:16:30.976 04:19:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:30.976 04:19:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:30.976 04:19:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:30.976 04:19:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:30.976 04:19:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:30.976 04:19:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:30.976 04:19:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:30.976 04:19:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:30.976 04:19:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:30.976 04:19:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:30.976 04:19:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cb4d3929-adbe-4142-b5d1-990bbf2d4fca 00:16:30.976 
04:19:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=cb4d3929-adbe-4142-b5d1-990bbf2d4fca 00:16:30.976 04:19:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:30.976 04:19:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:30.976 04:19:43 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:30.976 04:19:43 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:30.976 04:19:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:30.976 04:19:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:30.976 04:19:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:30.976 04:19:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.976 04:19:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.976 04:19:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.976 04:19:43 -- paths/export.sh@5 -- # export PATH 00:16:30.976 04:19:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.976 04:19:43 -- nvmf/common.sh@46 -- # : 0 00:16:30.976 04:19:43 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:30.976 04:19:43 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:30.976 04:19:43 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:30.976 04:19:43 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:30.976 04:19:43 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:30.976 04:19:43 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
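The host identity set a few lines above comes straight from nvme-cli; a minimal sketch of the pairing, assuming the uuid is simply stripped off the generated NQN (the log only shows that NVME_HOSTID ends up equal to the uuid portion of NVME_HOSTNQN):

NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:cb4d3929-adbe-4142-b5d1-990bbf2d4fca
NVME_HOSTID=${NVME_HOSTNQN##*:}         # assumed derivation: keep only the trailing uuid
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")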
00:16:30.976 04:19:43 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:30.976 04:19:43 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:30.976 04:19:43 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:30.976 04:19:43 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:30.976 04:19:43 -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:30.976 04:19:43 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:30.976 04:19:43 -- host/failover.sh@18 -- # nvmftestinit 00:16:30.976 04:19:43 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:30.976 04:19:43 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:30.976 04:19:43 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:30.976 04:19:43 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:30.976 04:19:43 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:30.976 04:19:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:30.976 04:19:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:30.976 04:19:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:30.976 04:19:43 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:30.976 04:19:43 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:30.976 04:19:43 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:30.976 04:19:43 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:30.976 04:19:43 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:30.976 04:19:43 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:30.976 04:19:43 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:30.976 04:19:43 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:30.976 04:19:43 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:30.976 04:19:43 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:30.976 04:19:43 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:30.976 04:19:43 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:30.976 04:19:43 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:30.976 04:19:43 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:30.976 04:19:43 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:30.976 04:19:43 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:30.976 04:19:43 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:30.976 04:19:43 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:30.976 04:19:43 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:30.976 04:19:43 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:30.976 Cannot find device "nvmf_tgt_br" 00:16:31.235 04:19:43 -- nvmf/common.sh@154 -- # true 00:16:31.235 04:19:43 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:31.235 Cannot find device "nvmf_tgt_br2" 00:16:31.235 04:19:43 -- nvmf/common.sh@155 -- # true 00:16:31.235 04:19:43 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:31.235 04:19:43 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:31.235 Cannot find device "nvmf_tgt_br" 00:16:31.235 04:19:43 -- nvmf/common.sh@157 -- # true 00:16:31.235 04:19:43 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:31.235 Cannot find device "nvmf_tgt_br2" 00:16:31.235 04:19:43 -- nvmf/common.sh@158 -- # true 00:16:31.235 04:19:43 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:31.235 04:19:43 -- nvmf/common.sh@160 
-- # ip link delete nvmf_init_if 00:16:31.235 04:19:43 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:31.235 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:31.235 04:19:43 -- nvmf/common.sh@161 -- # true 00:16:31.235 04:19:43 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:31.235 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:31.235 04:19:43 -- nvmf/common.sh@162 -- # true 00:16:31.235 04:19:43 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:31.235 04:19:43 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:31.235 04:19:43 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:31.235 04:19:43 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:31.235 04:19:43 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:31.235 04:19:43 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:31.235 04:19:43 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:31.235 04:19:43 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:31.235 04:19:43 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:31.235 04:19:43 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:31.235 04:19:43 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:31.235 04:19:43 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:31.235 04:19:43 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:31.235 04:19:43 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:31.235 04:19:43 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:31.235 04:19:43 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:31.235 04:19:43 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:31.235 04:19:43 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:31.235 04:19:43 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:31.235 04:19:43 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:31.235 04:19:43 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:31.235 04:19:43 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:31.235 04:19:43 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:31.235 04:19:43 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:31.235 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:31.235 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:16:31.235 00:16:31.236 --- 10.0.0.2 ping statistics --- 00:16:31.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:31.236 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:16:31.236 04:19:43 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:31.236 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:16:31.236 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:16:31.236 00:16:31.236 --- 10.0.0.3 ping statistics --- 00:16:31.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:31.236 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:16:31.236 04:19:43 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:31.236 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:31.236 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:16:31.236 00:16:31.236 --- 10.0.0.1 ping statistics --- 00:16:31.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:31.236 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:16:31.236 04:19:43 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:31.236 04:19:43 -- nvmf/common.sh@421 -- # return 0 00:16:31.236 04:19:43 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:31.236 04:19:43 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:31.236 04:19:43 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:31.236 04:19:43 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:31.236 04:19:43 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:31.236 04:19:43 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:31.236 04:19:43 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:31.494 04:19:43 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:16:31.494 04:19:43 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:31.494 04:19:43 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:31.494 04:19:43 -- common/autotest_common.sh@10 -- # set +x 00:16:31.494 04:19:43 -- nvmf/common.sh@469 -- # nvmfpid=82393 00:16:31.494 04:19:43 -- nvmf/common.sh@470 -- # waitforlisten 82393 00:16:31.494 04:19:43 -- common/autotest_common.sh@829 -- # '[' -z 82393 ']' 00:16:31.494 04:19:43 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:16:31.494 04:19:43 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:31.494 04:19:43 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:31.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:31.494 04:19:43 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:31.494 04:19:43 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:31.494 04:19:43 -- common/autotest_common.sh@10 -- # set +x 00:16:31.494 [2024-12-06 04:19:43.873481] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:31.494 [2024-12-06 04:19:43.873572] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:31.494 [2024-12-06 04:19:44.015305] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:31.753 [2024-12-06 04:19:44.104362] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:31.753 [2024-12-06 04:19:44.104763] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:31.753 [2024-12-06 04:19:44.104816] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
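The nvmf_veth_init sequence traced above is dense; condensed, and keeping only the commands and addresses actually shown in the log (link bring-up steps elided), the topology it builds and that the ping checks just verified is:

# target-side interfaces live in a private namespace, the initiator side stays in the root namespace
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator: 10.0.0.1/24
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target:    10.0.0.2/24
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2     # target:    10.0.0.3/24
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
# all *_br peers join one bridge so the three addresses share a single L2 segment
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT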
00:16:31.753 [2024-12-06 04:19:44.105047] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:31.753 [2024-12-06 04:19:44.105242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:31.753 [2024-12-06 04:19:44.105414] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:31.753 [2024-12-06 04:19:44.105475] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:32.366 04:19:44 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:32.366 04:19:44 -- common/autotest_common.sh@862 -- # return 0 00:16:32.366 04:19:44 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:32.367 04:19:44 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:32.367 04:19:44 -- common/autotest_common.sh@10 -- # set +x 00:16:32.367 04:19:44 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:32.367 04:19:44 -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:32.627 [2024-12-06 04:19:45.089044] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:32.627 04:19:45 -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:16:32.885 Malloc0 00:16:32.885 04:19:45 -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:33.143 04:19:45 -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:33.401 04:19:45 -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:33.658 [2024-12-06 04:19:46.086573] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:33.658 04:19:46 -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:33.916 [2024-12-06 04:19:46.326743] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:33.916 04:19:46 -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:16:34.175 [2024-12-06 04:19:46.663240] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:16:34.175 04:19:46 -- host/failover.sh@31 -- # bdevperf_pid=82451 00:16:34.175 04:19:46 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:34.175 04:19:46 -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:16:34.175 04:19:46 -- host/failover.sh@34 -- # waitforlisten 82451 /var/tmp/bdevperf.sock 00:16:34.175 04:19:46 -- common/autotest_common.sh@829 -- # '[' -z 82451 ']' 00:16:34.175 04:19:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:34.175 04:19:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:34.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
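Condensed, the target-side provisioning that failover.sh traced above boils down to the rpc.py calls below (values exactly as logged: a 64 MiB malloc bdev with 512 B blocks exported as cnode1 on three TCP listeners; the loop is only a compact rendering of the three separate add_listener calls):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
for port in 4420 4421 4422; do
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$port"
done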
00:16:34.175 04:19:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:34.175 04:19:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:34.175 04:19:46 -- common/autotest_common.sh@10 -- # set +x 00:16:35.113 04:19:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:35.113 04:19:47 -- common/autotest_common.sh@862 -- # return 0 00:16:35.113 04:19:47 -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:35.682 NVMe0n1 00:16:35.682 04:19:47 -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:35.940 00:16:35.940 04:19:48 -- host/failover.sh@39 -- # run_test_pid=82480 00:16:35.940 04:19:48 -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:35.940 04:19:48 -- host/failover.sh@41 -- # sleep 1 00:16:36.877 04:19:49 -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:37.136 [2024-12-06 04:19:49.599365] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f5a240 is same with the state(5) to be set 00:16:37.136 [2024-12-06 04:19:49.599447] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f5a240 is same with the state(5) to be set 00:16:37.136 [2024-12-06 04:19:49.599475] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f5a240 is same with the state(5) to be set 00:16:37.136 [2024-12-06 04:19:49.599484] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f5a240 is same with the state(5) to be set 00:16:37.136 [2024-12-06 04:19:49.599493] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f5a240 is same with the state(5) to be set 00:16:37.136 [2024-12-06 04:19:49.599501] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f5a240 is same with the state(5) to be set 00:16:37.136 [2024-12-06 04:19:49.599509] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f5a240 is same with the state(5) to be set 00:16:37.136 [2024-12-06 04:19:49.599517] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f5a240 is same with the state(5) to be set 00:16:37.136 04:19:49 -- host/failover.sh@45 -- # sleep 3 00:16:40.453 04:19:52 -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:40.453 00:16:40.453 04:19:52 -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:40.712 [2024-12-06 04:19:53.231620] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f5ae50 is same with the state(5) to be set 00:16:40.712 [2024-12-06 04:19:53.231677] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f5ae50 is same with the state(5) to be set 00:16:40.712 [2024-12-06 04:19:53.231688] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f5ae50 is same with the state(5) to be set 00:16:40.712 [2024-12-06 04:19:53.231697] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f5ae50 is same with the state(5) to be set 00:16:40.712 [2024-12-06 04:19:53.231705] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f5ae50 is same with the state(5) to be set 00:16:40.712 [2024-12-06 04:19:53.231714] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f5ae50 is same with the state(5) to be set 00:16:40.712 [2024-12-06 04:19:53.231723] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f5ae50 is same with the state(5) to be set 00:16:40.712 [2024-12-06 04:19:53.231731] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f5ae50 is same with the state(5) to be set 00:16:40.712 [2024-12-06 04:19:53.231740] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f5ae50 is same with the state(5) to be set 00:16:40.712 [2024-12-06 04:19:53.231748] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f5ae50 is same with the state(5) to be set 00:16:40.712 [2024-12-06 04:19:53.231756] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f5ae50 is same with the state(5) to be set 00:16:40.712 [2024-12-06 04:19:53.231764] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f5ae50 is same with the state(5) to be set 00:16:40.712 [2024-12-06 04:19:53.231772] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f5ae50 is same with the state(5) to be set 00:16:40.712 [2024-12-06 04:19:53.231782] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f5ae50 is same with the state(5) to be set 00:16:40.712 [2024-12-06 04:19:53.231790] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f5ae50 is same with the state(5) to be set 00:16:40.712 [2024-12-06 04:19:53.231797] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f5ae50 is same with the state(5) to be set 00:16:40.712 [2024-12-06 04:19:53.231805] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f5ae50 is same with the state(5) to be set 00:16:40.712 [2024-12-06 04:19:53.231813] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f5ae50 is same with the state(5) to be set 00:16:40.712 [2024-12-06 04:19:53.231821] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f5ae50 is same with the state(5) to be set 00:16:40.712 [2024-12-06 04:19:53.231829] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f5ae50 is same with the state(5) to be set 00:16:40.712 [2024-12-06 04:19:53.231836] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f5ae50 is same with the state(5) to be set 00:16:40.712 [2024-12-06 04:19:53.231845] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f5ae50 is same with the state(5) to be set 00:16:40.712 [2024-12-06 04:19:53.231853] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f5ae50 is same with the state(5) to be set 00:16:40.712 [2024-12-06 04:19:53.231860] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f5ae50 is same with the 
state(5) to be set 00:16:40.712 [2024-12-06 04:19:53.231868] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f5ae50 is same with the state(5) to be set 00:16:40.712 [2024-12-06 04:19:53.231876] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f5ae50 is same with the state(5) to be set 00:16:40.712 04:19:53 -- host/failover.sh@50 -- # sleep 3 00:16:43.998 04:19:56 -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:43.998 [2024-12-06 04:19:56.516758] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:43.998 04:19:56 -- host/failover.sh@55 -- # sleep 1 00:16:45.376 04:19:57 -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:16:45.376 [2024-12-06 04:19:57.786606] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20fe550 is same with the state(5) to be set 00:16:45.376 [2024-12-06 04:19:57.786661] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20fe550 is same with the state(5) to be set 00:16:45.376 [2024-12-06 04:19:57.786673] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20fe550 is same with the state(5) to be set 00:16:45.376 [2024-12-06 04:19:57.786684] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20fe550 is same with the state(5) to be set 00:16:45.376 [2024-12-06 04:19:57.786693] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20fe550 is same with the state(5) to be set 00:16:45.376 [2024-12-06 04:19:57.786701] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20fe550 is same with the state(5) to be set 00:16:45.376 [2024-12-06 04:19:57.786710] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20fe550 is same with the state(5) to be set 00:16:45.376 [2024-12-06 04:19:57.786718] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20fe550 is same with the state(5) to be set 00:16:45.376 [2024-12-06 04:19:57.786727] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20fe550 is same with the state(5) to be set 00:16:45.376 [2024-12-06 04:19:57.786736] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20fe550 is same with the state(5) to be set 00:16:45.376 [2024-12-06 04:19:57.786744] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20fe550 is same with the state(5) to be set 00:16:45.376 [2024-12-06 04:19:57.786752] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20fe550 is same with the state(5) to be set 00:16:45.376 [2024-12-06 04:19:57.786760] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20fe550 is same with the state(5) to be set 00:16:45.376 [2024-12-06 04:19:57.786769] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20fe550 is same with the state(5) to be set 00:16:45.376 [2024-12-06 04:19:57.786776] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20fe550 is same with the state(5) to be set 00:16:45.376 [2024-12-06 04:19:57.786784] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20fe550 is same with 
the state(5) to be set 00:16:45.376 [2024-12-06 04:19:57.786792] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20fe550 is same with the state(5) to be set 00:16:45.376 [2024-12-06 04:19:57.786800] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20fe550 is same with the state(5) to be set 00:16:45.376 04:19:57 -- host/failover.sh@59 -- # wait 82480 00:16:51.949 0 00:16:51.949 04:20:03 -- host/failover.sh@61 -- # killprocess 82451 00:16:51.949 04:20:03 -- common/autotest_common.sh@936 -- # '[' -z 82451 ']' 00:16:51.949 04:20:03 -- common/autotest_common.sh@940 -- # kill -0 82451 00:16:51.949 04:20:03 -- common/autotest_common.sh@941 -- # uname 00:16:51.949 04:20:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:51.949 04:20:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82451 00:16:51.949 killing process with pid 82451 00:16:51.949 04:20:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:51.949 04:20:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:51.949 04:20:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82451' 00:16:51.949 04:20:03 -- common/autotest_common.sh@955 -- # kill 82451 00:16:51.949 04:20:03 -- common/autotest_common.sh@960 -- # wait 82451 00:16:51.949 04:20:03 -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:51.949 [2024-12-06 04:19:46.725659] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:51.949 [2024-12-06 04:19:46.725751] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82451 ] 00:16:51.949 [2024-12-06 04:19:46.855930] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:51.949 [2024-12-06 04:19:46.932054] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:51.949 Running I/O for 15 seconds... 
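For reading the completion flood that follows: bdevperf holds two paths to cnode1 under the single controller name NVMe0 (attached on ports 4420 and 4421), and the test then removes listeners while the 15-second verify workload runs, so commands outstanding on a torn-down queue pair complete as ABORTED - SQ DELETION and are retried on a surviving path; the run still completes and bdevperf is killed cleanly afterwards. A minimal sketch of the initiator-side attach and the first failover step, using only the rpc calls already traced:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
bperf_rpc() { "$rpc" -s /var/tmp/bdevperf.sock "$@"; }
# the second attach with the same -b name registers an alternate path to the same subsystem
bperf_rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
bperf_rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
# drop the 4420 listener mid-I/O; traffic has to move to the 4421 path
$rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420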
00:16:51.949 [2024-12-06 04:19:49.599570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:129600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.949 [2024-12-06 04:19:49.599622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.949 [2024-12-06 04:19:49.599651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:128928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.949 [2024-12-06 04:19:49.599683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.949 [2024-12-06 04:19:49.599700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:128968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.949 [2024-12-06 04:19:49.599715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.949 [2024-12-06 04:19:49.599732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:128976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.949 [2024-12-06 04:19:49.599746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.949 [2024-12-06 04:19:49.599762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:128992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.949 [2024-12-06 04:19:49.599779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.949 [2024-12-06 04:19:49.599804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:129008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.949 [2024-12-06 04:19:49.599818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.949 [2024-12-06 04:19:49.599835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:129024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.949 [2024-12-06 04:19:49.599849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.949 [2024-12-06 04:19:49.599864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:129032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.949 [2024-12-06 04:19:49.599879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.949 [2024-12-06 04:19:49.599895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:129040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.949 [2024-12-06 04:19:49.599909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.949 [2024-12-06 04:19:49.599926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:129632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.949 [2024-12-06 04:19:49.599940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.949 [2024-12-06 
04:19:49.599957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:129640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.949 [2024-12-06 04:19:49.599972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.949 [2024-12-06 04:19:49.600010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:129648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.949 [2024-12-06 04:19:49.600027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.949 [2024-12-06 04:19:49.600043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:129656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.949 [2024-12-06 04:19:49.600058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.949 [2024-12-06 04:19:49.600074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:129664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.949 [2024-12-06 04:19:49.600089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.949 [2024-12-06 04:19:49.600105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:129672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.949 [2024-12-06 04:19:49.600127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.949 [2024-12-06 04:19:49.600156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:129680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.949 [2024-12-06 04:19:49.600182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.949 [2024-12-06 04:19:49.600212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:129688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.949 [2024-12-06 04:19:49.600228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.949 [2024-12-06 04:19:49.600244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:129696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.949 [2024-12-06 04:19:49.600259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.949 [2024-12-06 04:19:49.600275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:129704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.949 [2024-12-06 04:19:49.600289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.949 [2024-12-06 04:19:49.600307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:129712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.949 [2024-12-06 04:19:49.600321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.949 [2024-12-06 04:19:49.600337] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:129720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:16:51.949 [2024-12-06 04:19:49.600351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated nvme_io_qpair_print_command / spdk_nvme_print_completion notices omitted: every remaining outstanding READ/WRITE on qid:1 is printed and completed with ABORTED - SQ DELETION (00/08) while the submission queue is torn down ...]
00:16:51.952 [2024-12-06 04:19:49.603884] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa337e0 is same with the state(5) to be set
00:16:51.952 [2024-12-06 04:19:49.603902] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:16:51.952 [2024-12-06 04:19:49.603918] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:16:51.952 [2024-12-06 04:19:49.603930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:129624 len:8 PRP1 0x0 PRP2 0x0
00:16:51.952 [2024-12-06 04:19:49.603944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:51.952 [2024-12-06 04:19:49.604003] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xa337e0 was disconnected and freed. reset controller.
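Every completion in the burst above carries the status pair (00/08), which spdk_nvme_print_completion prints as (SCT/SC): status code type 0x0 (generic command status) with status code 0x08, i.e. the command was aborted because its submission queue was deleted during the reset. A minimal decoding sketch follows; the helper and the partial status table are illustrative only, not part of SPDK or of this test.

```python
# Illustrative helper (not part of SPDK or this test): decode the "(SCT/SC)"
# pair that spdk_nvme_print_completion emits, e.g. "(00/08)" in the log above.
import re

# Partial table: only the generic (SCT 0x0) codes relevant to this log.
GENERIC_STATUS = {
    0x00: "SUCCESS",
    0x08: "ABORTED - SQ DELETION",  # command aborted because its SQ was deleted
}

def decode_status(pair: str) -> str:
    """Turn a 'sct/sc' hex pair such as '00/08' into a readable string."""
    sct, sc = (int(field, 16) for field in pair.split("/"))
    if sct == 0x0:
        return GENERIC_STATUS.get(sc, f"generic status 0x{sc:02x}")
    return f"sct=0x{sct:x} sc=0x{sc:02x}"

sample = "*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0"
match = re.search(r"\(([0-9a-fA-F]{2}/[0-9a-fA-F]{2})\)", sample)
if match:
    print(decode_status(match.group(1)))  # prints: ABORTED - SQ DELETION
```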
00:16:51.952 [2024-12-06 04:19:49.604029] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:16:51.952 [2024-12-06 04:19:49.604096] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:16:51.952 [2024-12-06 04:19:49.604118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:51.952 [2024-12-06 04:19:49.604134] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:16:51.952 [2024-12-06 04:19:49.604163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:51.952 [2024-12-06 04:19:49.604177] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:16:51.952 [2024-12-06 04:19:49.604189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:51.952 [2024-12-06 04:19:49.604202] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:16:51.952 [2024-12-06 04:19:49.604215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:51.952 [2024-12-06 04:19:49.604228] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:16:51.952 [2024-12-06 04:19:49.604271] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa36820 (9): Bad file descriptor
00:16:51.952 [2024-12-06 04:19:49.606718] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:16:51.952 [2024-12-06 04:19:49.634831] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
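The sequence above is one complete path switch: the outstanding I/O and the admin queue's ASYNC EVENT REQUESTs are aborted, the controller is marked failed, bdev_nvme fails over from 10.0.0.2:4420 to 10.0.0.2:4421, and the reset completes successfully; the same pattern repeats at 04:19:53 on the next disruption below. When triaging a run like this it can help to condense the abort spam into per-queue counts plus the failover transitions; a rough sketch of such a summary, reading the console text from stdin (the script and its names are hypothetical, not part of the autotest suite):

```python
# Hypothetical triage helper, not part of the autotest suite: summarize the
# "ABORTED - SQ DELETION" completions and the bdev_nvme failover notices
# found in console output fed on stdin.
import re
import sys
from collections import Counter

ABORT_RE = re.compile(r"ABORTED - SQ DELETION \(00/08\) qid:(\d+)")
FAILOVER_RE = re.compile(r"Start failover from (\S+) to (\S+)")

def summarize(text: str) -> None:
    aborts = Counter(match.group(1) for match in ABORT_RE.finditer(text))
    for qid, count in sorted(aborts.items()):
        print(f"qid {qid}: {count} completions aborted by SQ deletion")
    for src, dst in FAILOVER_RE.findall(text):
        print(f"failover: {src} -> {dst}")

if __name__ == "__main__":
    summarize(sys.stdin.read())
```

Fed this section of the output, it would report the qid:0 and qid:1 abort counts and the single 10.0.0.2:4420 -> 10.0.0.2:4421 transition logged above.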
00:16:51.952 [2024-12-06 04:19:53.231195] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:16:51.952 [2024-12-06 04:19:53.231267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:51.952 [2024-12-06 04:19:53.231312] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:16:51.952 [2024-12-06 04:19:53.231328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:51.952 [2024-12-06 04:19:53.231342] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:16:51.952 [2024-12-06 04:19:53.231356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:51.952 [2024-12-06 04:19:53.231369] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:16:51.952 [2024-12-06 04:19:53.231394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:51.952 [2024-12-06 04:19:53.231411] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa36820 is same with the state(5) to be set
00:16:51.952 [2024-12-06 04:19:53.231928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:5736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:51.952 [2024-12-06 04:19:53.231957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated nvme_io_qpair_print_command / spdk_nvme_print_completion notices omitted: the I/O outstanding on qid:1 at the time of this disruption is again printed and completed with ABORTED - SQ DELETION (00/08) ...]
00:16:51.954 [2024-12-06 04:19:53.234764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:6848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:16:51.954 [2024-12-06 04:19:53.234778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.954 [2024-12-06 04:19:53.234794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:6856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.954 [2024-12-06 04:19:53.234823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.954 [2024-12-06 04:19:53.234838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:6864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.954 [2024-12-06 04:19:53.234868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.954 [2024-12-06 04:19:53.234883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:6872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.954 [2024-12-06 04:19:53.234897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.954 [2024-12-06 04:19:53.234912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:6880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.954 [2024-12-06 04:19:53.234926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.954 [2024-12-06 04:19:53.234942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:6888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.954 [2024-12-06 04:19:53.234955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.954 [2024-12-06 04:19:53.234970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:6896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.954 [2024-12-06 04:19:53.234984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.954 [2024-12-06 04:19:53.234999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:6904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.954 [2024-12-06 04:19:53.235015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.954 [2024-12-06 04:19:53.235030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:6912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.954 [2024-12-06 04:19:53.235044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.954 [2024-12-06 04:19:53.235059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:6216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.954 [2024-12-06 04:19:53.235072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.954 [2024-12-06 04:19:53.235087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:6232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.954 [2024-12-06 04:19:53.235100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:16:51.954 [2024-12-06 04:19:53.235115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.954 [2024-12-06 04:19:53.235129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.954 [2024-12-06 04:19:53.235152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:6264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.954 [2024-12-06 04:19:53.235166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.954 [2024-12-06 04:19:53.235189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:6272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.954 [2024-12-06 04:19:53.235203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.954 [2024-12-06 04:19:53.235219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:6288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.954 [2024-12-06 04:19:53.235233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.954 [2024-12-06 04:19:53.235249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:6296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.954 [2024-12-06 04:19:53.235262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.954 [2024-12-06 04:19:53.235277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.954 [2024-12-06 04:19:53.235291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.954 [2024-12-06 04:19:53.235306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:6920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.954 [2024-12-06 04:19:53.235320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.954 [2024-12-06 04:19:53.235335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:6928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.954 [2024-12-06 04:19:53.235348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.954 [2024-12-06 04:19:53.235363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:6936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.954 [2024-12-06 04:19:53.235382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.954 [2024-12-06 04:19:53.235414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:6944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.954 [2024-12-06 04:19:53.235428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.954 [2024-12-06 04:19:53.235444] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:6952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.954 [2024-12-06 04:19:53.235475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.954 [2024-12-06 04:19:53.235492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:6960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.954 [2024-12-06 04:19:53.235507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.954 [2024-12-06 04:19:53.235523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:6968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.954 [2024-12-06 04:19:53.235537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.954 [2024-12-06 04:19:53.235552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:6976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.954 [2024-12-06 04:19:53.235574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.954 [2024-12-06 04:19:53.235591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:6984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.954 [2024-12-06 04:19:53.235605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.954 [2024-12-06 04:19:53.235620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:6992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.954 [2024-12-06 04:19:53.235652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.954 [2024-12-06 04:19:53.235668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:7000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.954 [2024-12-06 04:19:53.235683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.954 [2024-12-06 04:19:53.235699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:7008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.954 [2024-12-06 04:19:53.235713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.954 [2024-12-06 04:19:53.235734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:7016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.954 [2024-12-06 04:19:53.235749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.955 [2024-12-06 04:19:53.235766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.955 [2024-12-06 04:19:53.235795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.955 [2024-12-06 04:19:53.235810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:8 nsid:1 lba:7032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.955 [2024-12-06 04:19:53.235824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.955 [2024-12-06 04:19:53.235840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:7040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.955 [2024-12-06 04:19:53.235854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.955 [2024-12-06 04:19:53.235870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:7048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.955 [2024-12-06 04:19:53.235884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.955 [2024-12-06 04:19:53.235899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:7056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.955 [2024-12-06 04:19:53.235913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.955 [2024-12-06 04:19:53.235929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.955 [2024-12-06 04:19:53.235943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.955 [2024-12-06 04:19:53.235959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:6384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.955 [2024-12-06 04:19:53.235973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.955 [2024-12-06 04:19:53.235996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:6416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.955 [2024-12-06 04:19:53.236011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.955 [2024-12-06 04:19:53.236027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:6424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.955 [2024-12-06 04:19:53.236040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.955 [2024-12-06 04:19:53.236056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:6432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.955 [2024-12-06 04:19:53.236070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.955 [2024-12-06 04:19:53.236086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:6440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.955 [2024-12-06 04:19:53.236100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.955 [2024-12-06 04:19:53.236115] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa34440 is same with the 
state(5) to be set 00:16:51.955 [2024-12-06 04:19:53.236131] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:51.955 [2024-12-06 04:19:53.236142] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:51.955 [2024-12-06 04:19:53.236153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6464 len:8 PRP1 0x0 PRP2 0x0 00:16:51.955 [2024-12-06 04:19:53.236167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.955 [2024-12-06 04:19:53.236225] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xa34440 was disconnected and freed. reset controller. 00:16:51.955 [2024-12-06 04:19:53.236244] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:16:51.955 [2024-12-06 04:19:53.236259] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:51.955 [2024-12-06 04:19:53.238529] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:51.955 [2024-12-06 04:19:53.238569] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa36820 (9): Bad file descriptor 00:16:51.955 [2024-12-06 04:19:53.267624] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:16:51.955 [2024-12-06 04:19:57.786860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:127232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.955 [2024-12-06 04:19:57.786928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.955 [2024-12-06 04:19:57.786955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:127240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.955 [2024-12-06 04:19:57.786988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.955 [2024-12-06 04:19:57.787006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:127248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.955 [2024-12-06 04:19:57.787039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.955 [2024-12-06 04:19:57.787059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:127272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.955 [2024-12-06 04:19:57.787074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.955 [2024-12-06 04:19:57.787111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:127280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.955 [2024-12-06 04:19:57.787143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.955 [2024-12-06 04:19:57.787161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:126584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.955 [2024-12-06 04:19:57.787175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.955 [2024-12-06 04:19:57.787191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:126600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.955 [2024-12-06 04:19:57.787205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.955 [2024-12-06 04:19:57.787221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:126616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.955 [2024-12-06 04:19:57.787236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.955 [2024-12-06 04:19:57.787252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:126664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.955 [2024-12-06 04:19:57.787266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.955 [2024-12-06 04:19:57.787282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:126672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.955 [2024-12-06 04:19:57.787297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.955 [2024-12-06 04:19:57.787313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:126680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.955 [2024-12-06 04:19:57.787327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.955 [2024-12-06 04:19:57.787343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:126696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.955 [2024-12-06 04:19:57.787357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.955 [2024-12-06 04:19:57.787375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:126704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.955 [2024-12-06 04:19:57.787413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.955 [2024-12-06 04:19:57.787457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:127312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.955 [2024-12-06 04:19:57.787473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.955 [2024-12-06 04:19:57.787489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:127328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.955 [2024-12-06 04:19:57.787503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.955 [2024-12-06 04:19:57.787518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:127344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.955 [2024-12-06 04:19:57.787533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:16:51.955 [2024-12-06 04:19:57.787549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:127360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.955 [2024-12-06 04:19:57.787565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.955 [2024-12-06 04:19:57.787593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:127368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.955 [2024-12-06 04:19:57.787608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.955 [2024-12-06 04:19:57.787624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:127376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.955 [2024-12-06 04:19:57.787638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.955 [2024-12-06 04:19:57.787654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:127384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.955 [2024-12-06 04:19:57.787668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.955 [2024-12-06 04:19:57.787684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:127392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.955 [2024-12-06 04:19:57.787698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.955 [2024-12-06 04:19:57.787714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:127400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.955 [2024-12-06 04:19:57.787728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.955 [2024-12-06 04:19:57.787744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:127408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.955 [2024-12-06 04:19:57.787759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.955 [2024-12-06 04:19:57.787774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:126712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.955 [2024-12-06 04:19:57.787788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.955 [2024-12-06 04:19:57.787804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:126728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.955 [2024-12-06 04:19:57.787818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.955 [2024-12-06 04:19:57.787833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:126744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.955 [2024-12-06 04:19:57.787847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.955 [2024-12-06 
04:19:57.787863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:126752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.955 [2024-12-06 04:19:57.787877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.955 [2024-12-06 04:19:57.787892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:126768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.955 [2024-12-06 04:19:57.787906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.955 [2024-12-06 04:19:57.787922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:126776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.955 [2024-12-06 04:19:57.787936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.955 [2024-12-06 04:19:57.787951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:126800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.955 [2024-12-06 04:19:57.787972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.955 [2024-12-06 04:19:57.787988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:126808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.956 [2024-12-06 04:19:57.788002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.956 [2024-12-06 04:19:57.788018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:127416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.956 [2024-12-06 04:19:57.788032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.956 [2024-12-06 04:19:57.788047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:127424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.956 [2024-12-06 04:19:57.788078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.956 [2024-12-06 04:19:57.788094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:127432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.956 [2024-12-06 04:19:57.788109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.956 [2024-12-06 04:19:57.788125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:127440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.956 [2024-12-06 04:19:57.788139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.956 [2024-12-06 04:19:57.788155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:127448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.956 [2024-12-06 04:19:57.788170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.956 [2024-12-06 04:19:57.788187] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:127456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.956 [2024-12-06 04:19:57.788201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.956 [2024-12-06 04:19:57.788217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:127464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.956 [2024-12-06 04:19:57.788232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.956 [2024-12-06 04:19:57.788248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:127472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.956 [2024-12-06 04:19:57.788262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.956 [2024-12-06 04:19:57.788278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:127480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.956 [2024-12-06 04:19:57.788292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.956 [2024-12-06 04:19:57.788308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:127488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.956 [2024-12-06 04:19:57.788323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.956 [2024-12-06 04:19:57.788339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:127496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.956 [2024-12-06 04:19:57.788353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.956 [2024-12-06 04:19:57.788376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:127504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.956 [2024-12-06 04:19:57.788391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.956 [2024-12-06 04:19:57.788407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:127512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.956 [2024-12-06 04:19:57.788451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.956 [2024-12-06 04:19:57.788468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:127520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.956 [2024-12-06 04:19:57.788483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.956 [2024-12-06 04:19:57.788498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:126824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.956 [2024-12-06 04:19:57.788512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.956 [2024-12-06 04:19:57.788528] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:46 nsid:1 lba:126832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.956 [2024-12-06 04:19:57.788542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.956 [2024-12-06 04:19:57.788557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:126840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.956 [2024-12-06 04:19:57.788571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.956 [2024-12-06 04:19:57.788587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:126848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.956 [2024-12-06 04:19:57.788601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.956 [2024-12-06 04:19:57.788617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:126856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.956 [2024-12-06 04:19:57.788631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.956 [2024-12-06 04:19:57.788647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:126872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.956 [2024-12-06 04:19:57.788661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.956 [2024-12-06 04:19:57.788676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:126896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.956 [2024-12-06 04:19:57.788690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.956 [2024-12-06 04:19:57.788706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:126920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.956 [2024-12-06 04:19:57.788720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.956 [2024-12-06 04:19:57.788736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:127528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.956 [2024-12-06 04:19:57.788750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.956 [2024-12-06 04:19:57.788766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:127536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.956 [2024-12-06 04:19:57.788787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.956 [2024-12-06 04:19:57.788804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:127544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.956 [2024-12-06 04:19:57.788818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.956 [2024-12-06 04:19:57.788834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 
lba:127552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.956 [2024-12-06 04:19:57.788847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.956 [2024-12-06 04:19:57.788863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:127560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.956 [2024-12-06 04:19:57.788877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.956 [2024-12-06 04:19:57.788893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:127568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.956 [2024-12-06 04:19:57.788906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.956 [2024-12-06 04:19:57.788922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:127576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.956 [2024-12-06 04:19:57.788936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.956 [2024-12-06 04:19:57.788951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:127584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.956 [2024-12-06 04:19:57.788965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.956 [2024-12-06 04:19:57.788981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:127592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.956 [2024-12-06 04:19:57.788995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.956 [2024-12-06 04:19:57.789010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:127600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.956 [2024-12-06 04:19:57.789025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.956 [2024-12-06 04:19:57.789040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:127608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.956 [2024-12-06 04:19:57.789071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.956 [2024-12-06 04:19:57.789088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:127616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.956 [2024-12-06 04:19:57.789104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.956 [2024-12-06 04:19:57.789120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:127624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.956 [2024-12-06 04:19:57.789135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.956 [2024-12-06 04:19:57.789151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:127632 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:16:51.956 [2024-12-06 04:19:57.789165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.956 [2024-12-06 04:19:57.789188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:127640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.956 [2024-12-06 04:19:57.789205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.956 [2024-12-06 04:19:57.789222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:126944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.956 [2024-12-06 04:19:57.789236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.956 [2024-12-06 04:19:57.789253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:126960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.956 [2024-12-06 04:19:57.789267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.956 [2024-12-06 04:19:57.789283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:126984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.956 [2024-12-06 04:19:57.789298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.956 [2024-12-06 04:19:57.789314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:127008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.956 [2024-12-06 04:19:57.789328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.957 [2024-12-06 04:19:57.789344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:127016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.957 [2024-12-06 04:19:57.789358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.957 [2024-12-06 04:19:57.789374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:127024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.957 [2024-12-06 04:19:57.789389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.957 [2024-12-06 04:19:57.789415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:127048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.957 [2024-12-06 04:19:57.789432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.957 [2024-12-06 04:19:57.789448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:127056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.957 [2024-12-06 04:19:57.789478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.957 [2024-12-06 04:19:57.789494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:127648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.957 
[2024-12-06 04:19:57.789508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.957 [2024-12-06 04:19:57.789523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:127656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.957 [2024-12-06 04:19:57.789537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.957 [2024-12-06 04:19:57.789552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:127664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.957 [2024-12-06 04:19:57.789567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.957 [2024-12-06 04:19:57.789583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:127672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.957 [2024-12-06 04:19:57.789604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.957 [2024-12-06 04:19:57.789621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:127680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.957 [2024-12-06 04:19:57.789635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.957 [2024-12-06 04:19:57.789651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:127688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.957 [2024-12-06 04:19:57.789665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.957 [2024-12-06 04:19:57.789680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:127696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.957 [2024-12-06 04:19:57.789695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.957 [2024-12-06 04:19:57.789710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:127704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.957 [2024-12-06 04:19:57.789725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.957 [2024-12-06 04:19:57.789741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:127712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.957 [2024-12-06 04:19:57.789755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.957 [2024-12-06 04:19:57.789771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:127720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.957 [2024-12-06 04:19:57.789784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.957 [2024-12-06 04:19:57.789800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:127728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.957 [2024-12-06 04:19:57.789830] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.957 [2024-12-06 04:19:57.789846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:127736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.957 [2024-12-06 04:19:57.789861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.957 [2024-12-06 04:19:57.789877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:127744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.957 [2024-12-06 04:19:57.789891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.957 [2024-12-06 04:19:57.789907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:127752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.957 [2024-12-06 04:19:57.789921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.957 [2024-12-06 04:19:57.789937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:127760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.957 [2024-12-06 04:19:57.789952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.957 [2024-12-06 04:19:57.789968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:127768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.957 [2024-12-06 04:19:57.789982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.957 [2024-12-06 04:19:57.790005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:127776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.957 [2024-12-06 04:19:57.790021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.957 [2024-12-06 04:19:57.790037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:127080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.957 [2024-12-06 04:19:57.790051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.957 [2024-12-06 04:19:57.790067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:127120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.957 [2024-12-06 04:19:57.790082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.957 [2024-12-06 04:19:57.790097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:127168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.957 [2024-12-06 04:19:57.790113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.957 [2024-12-06 04:19:57.790129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:127200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.957 [2024-12-06 04:19:57.790143] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.957 [2024-12-06 04:19:57.790159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:127208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.957 [2024-12-06 04:19:57.790173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.957 [2024-12-06 04:19:57.790190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:127224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.957 [2024-12-06 04:19:57.790204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.957 [2024-12-06 04:19:57.790227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:127256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.957 [2024-12-06 04:19:57.790242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.957 [2024-12-06 04:19:57.790258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:127264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.957 [2024-12-06 04:19:57.790272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.957 [2024-12-06 04:19:57.790288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:127784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.957 [2024-12-06 04:19:57.790303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.957 [2024-12-06 04:19:57.790320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:127792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.957 [2024-12-06 04:19:57.790334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.957 [2024-12-06 04:19:57.790350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:127800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.957 [2024-12-06 04:19:57.790366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.957 [2024-12-06 04:19:57.790382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:127808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.957 [2024-12-06 04:19:57.790397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.957 [2024-12-06 04:19:57.790434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:127816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.957 [2024-12-06 04:19:57.790450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.957 [2024-12-06 04:19:57.790466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:127824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.957 [2024-12-06 04:19:57.790490] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.957 [2024-12-06 04:19:57.790509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:127832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.957 [2024-12-06 04:19:57.790524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.957 [2024-12-06 04:19:57.790541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:127840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.957 [2024-12-06 04:19:57.790555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.957 [2024-12-06 04:19:57.790571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:127848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.957 [2024-12-06 04:19:57.790586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.957 [2024-12-06 04:19:57.790602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:127856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.957 [2024-12-06 04:19:57.790617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.957 [2024-12-06 04:19:57.790633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:127864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.957 [2024-12-06 04:19:57.790648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.957 [2024-12-06 04:19:57.790664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:127872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.957 [2024-12-06 04:19:57.790679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.957 [2024-12-06 04:19:57.790694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:127880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.957 [2024-12-06 04:19:57.790709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.957 [2024-12-06 04:19:57.790725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:127888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.957 [2024-12-06 04:19:57.790739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.957 [2024-12-06 04:19:57.790760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:127896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.957 [2024-12-06 04:19:57.790775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.957 [2024-12-06 04:19:57.790791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:127904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.957 [2024-12-06 04:19:57.790806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.957 [2024-12-06 04:19:57.790822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:127912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.957 [2024-12-06 04:19:57.790844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.957 [2024-12-06 04:19:57.790862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:127920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.958 [2024-12-06 04:19:57.790877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.958 [2024-12-06 04:19:57.790893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:127928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.958 [2024-12-06 04:19:57.790908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.958 [2024-12-06 04:19:57.790939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:127936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.958 [2024-12-06 04:19:57.790953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.958 [2024-12-06 04:19:57.790968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:127944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.958 [2024-12-06 04:19:57.790982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.958 [2024-12-06 04:19:57.790998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:127288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.958 [2024-12-06 04:19:57.791012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.958 [2024-12-06 04:19:57.791028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:127296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.958 [2024-12-06 04:19:57.791042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.958 [2024-12-06 04:19:57.791069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:127304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.958 [2024-12-06 04:19:57.791083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.958 [2024-12-06 04:19:57.791099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:127320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.958 [2024-12-06 04:19:57.791114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.958 [2024-12-06 04:19:57.791130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:127336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.958 [2024-12-06 04:19:57.791145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:16:51.958 [2024-12-06 04:19:57.791160] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5ab00 is same with the state(5) to be set 00:16:51.958 [2024-12-06 04:19:57.791178] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:51.958 [2024-12-06 04:19:57.791190] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:51.958 [2024-12-06 04:19:57.791202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:127352 len:8 PRP1 0x0 PRP2 0x0 00:16:51.958 [2024-12-06 04:19:57.791215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.958 [2024-12-06 04:19:57.791274] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xa5ab00 was disconnected and freed. reset controller. 00:16:51.958 [2024-12-06 04:19:57.791292] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:16:51.958 [2024-12-06 04:19:57.791359] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:51.958 [2024-12-06 04:19:57.791428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.958 [2024-12-06 04:19:57.791474] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:51.958 [2024-12-06 04:19:57.791490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.958 [2024-12-06 04:19:57.791504] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:51.958 [2024-12-06 04:19:57.791517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.958 [2024-12-06 04:19:57.791531] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:51.958 [2024-12-06 04:19:57.791544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.958 [2024-12-06 04:19:57.791558] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:51.958 [2024-12-06 04:19:57.791635] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa36820 (9): Bad file descriptor 00:16:51.958 [2024-12-06 04:19:57.794255] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:51.958 [2024-12-06 04:19:57.829308] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
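The long run of "ABORTED - SQ DELETION (00/08)" notices above is the expected drain path during failover: when the TCP qpair serving 10.0.0.2:4422 is deleted, bdev_nvme completes every queued READ/WRITE manually with that status, frees the qpair (0xa5ab00), fails the trid over to 10.0.0.2:4420, and resets the controller ("Resetting controller successful"). In this test the path loss is provoked from the script side by detaching the path currently carrying I/O over the bdevperf RPC socket; a minimal sketch of that step, reusing the exact rpc.py call that appears later in this log:

    # Drop the active path; queued requests are aborted with SQ DELETION and
    # bdev_nvme fails over to the next registered trid, then resets the controller.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1
    sleep 3   # allow the reset to complete before the next check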
00:16:51.958 00:16:51.958 Latency(us) 00:16:51.958 [2024-12-06T04:20:04.523Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:51.958 [2024-12-06T04:20:04.523Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:51.958 Verification LBA range: start 0x0 length 0x4000 00:16:51.958 NVMe0n1 : 15.01 13657.80 53.35 303.65 0.00 9150.27 446.84 13226.36 00:16:51.958 [2024-12-06T04:20:04.523Z] =================================================================================================================== 00:16:51.958 [2024-12-06T04:20:04.523Z] Total : 13657.80 53.35 303.65 0.00 9150.27 446.84 13226.36 00:16:51.958 Received shutdown signal, test time was about 15.000000 seconds 00:16:51.958 00:16:51.958 Latency(us) 00:16:51.958 [2024-12-06T04:20:04.523Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:51.958 [2024-12-06T04:20:04.523Z] =================================================================================================================== 00:16:51.958 [2024-12-06T04:20:04.523Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:51.958 04:20:03 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:16:51.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:51.958 04:20:03 -- host/failover.sh@65 -- # count=3 00:16:51.958 04:20:03 -- host/failover.sh@67 -- # (( count != 3 )) 00:16:51.958 04:20:03 -- host/failover.sh@73 -- # bdevperf_pid=82657 00:16:51.958 04:20:03 -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:16:51.958 04:20:03 -- host/failover.sh@75 -- # waitforlisten 82657 /var/tmp/bdevperf.sock 00:16:51.958 04:20:03 -- common/autotest_common.sh@829 -- # '[' -z 82657 ']' 00:16:51.958 04:20:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:51.958 04:20:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:51.958 04:20:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:16:51.958 04:20:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:51.958 04:20:03 -- common/autotest_common.sh@10 -- # set +x 00:16:52.216 04:20:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:52.216 04:20:04 -- common/autotest_common.sh@862 -- # return 0 00:16:52.216 04:20:04 -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:52.474 [2024-12-06 04:20:04.999675] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:52.474 04:20:05 -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:16:52.732 [2024-12-06 04:20:05.215827] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:16:52.732 04:20:05 -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:52.990 NVMe0n1 00:16:52.990 04:20:05 -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:53.247 00:16:53.505 04:20:05 -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:53.764 00:16:53.764 04:20:06 -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:53.764 04:20:06 -- host/failover.sh@82 -- # grep -q NVMe0 00:16:54.021 04:20:06 -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:54.281 04:20:06 -- host/failover.sh@87 -- # sleep 3 00:16:57.568 04:20:09 -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:57.568 04:20:09 -- host/failover.sh@88 -- # grep -q NVMe0 00:16:57.568 04:20:09 -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:57.568 04:20:09 -- host/failover.sh@90 -- # run_test_pid=82738 00:16:57.568 04:20:09 -- host/failover.sh@92 -- # wait 82738 00:16:58.506 0 00:16:58.506 04:20:11 -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:58.506 [2024-12-06 04:20:03.755616] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
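Two things happen in the trace above before the captured bdevperf log (try.txt) is dumped below. First, the 15-second phase is judged purely by counting successful resets: the script greps the captured bdevperf output for "Resetting controller successful" and requires exactly 3, one per failover event; roughly,

    count=$(grep -c 'Resetting controller successful' try.txt)   # try.txt = captured bdevperf log
    (( count == 3 )) || exit 1                                   # three paths removed, three resets expected

Second, a fresh bdevperf instance is started idle (-z) with its own RPC socket (-r /var/tmp/bdevperf.sock), the multipath controller is rebuilt entirely over that socket, and a short 1-second verify job is kicked off with perform_tests. Condensed to the calls visible in this trace (a sketch, not the full failover.sh):

    # target: expose the subsystem on two additional ports
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
    # initiator (bdevperf): attach the same controller name to all three paths
    for port in 4420 4421 4422; do
        rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
            -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    done
    bdevperf.py -s /var/tmp/bdevperf.sock perform_tests          # run the configured -t 1 verify job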
00:16:58.506 [2024-12-06 04:20:03.755740] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82657 ] 00:16:58.506 [2024-12-06 04:20:03.888395] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:58.506 [2024-12-06 04:20:03.967431] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:58.506 [2024-12-06 04:20:06.588032] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:16:58.506 [2024-12-06 04:20:06.588162] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:58.506 [2024-12-06 04:20:06.588189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.506 [2024-12-06 04:20:06.588209] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:58.506 [2024-12-06 04:20:06.588229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.506 [2024-12-06 04:20:06.588244] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:58.506 [2024-12-06 04:20:06.588258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.506 [2024-12-06 04:20:06.588273] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:58.506 [2024-12-06 04:20:06.588287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.506 [2024-12-06 04:20:06.588301] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:58.506 [2024-12-06 04:20:06.588358] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:58.506 [2024-12-06 04:20:06.588406] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ff820 (9): Bad file descriptor 00:16:58.506 [2024-12-06 04:20:06.597933] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:16:58.506 Running I/O for 1 seconds... 
00:16:58.506 00:16:58.506 Latency(us) 00:16:58.506 [2024-12-06T04:20:11.071Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:58.506 [2024-12-06T04:20:11.071Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:58.506 Verification LBA range: start 0x0 length 0x4000 00:16:58.506 NVMe0n1 : 1.01 13870.00 54.18 0.00 0.00 9181.23 793.13 14358.34 00:16:58.506 [2024-12-06T04:20:11.071Z] =================================================================================================================== 00:16:58.506 [2024-12-06T04:20:11.071Z] Total : 13870.00 54.18 0.00 0.00 9181.23 793.13 14358.34 00:16:58.506 04:20:11 -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:58.506 04:20:11 -- host/failover.sh@95 -- # grep -q NVMe0 00:16:58.766 04:20:11 -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:59.334 04:20:11 -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:59.335 04:20:11 -- host/failover.sh@99 -- # grep -q NVMe0 00:16:59.335 04:20:11 -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:59.594 04:20:12 -- host/failover.sh@101 -- # sleep 3 00:17:02.884 04:20:15 -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:02.884 04:20:15 -- host/failover.sh@103 -- # grep -q NVMe0 00:17:02.884 04:20:15 -- host/failover.sh@108 -- # killprocess 82657 00:17:02.884 04:20:15 -- common/autotest_common.sh@936 -- # '[' -z 82657 ']' 00:17:02.884 04:20:15 -- common/autotest_common.sh@940 -- # kill -0 82657 00:17:02.884 04:20:15 -- common/autotest_common.sh@941 -- # uname 00:17:02.884 04:20:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:02.884 04:20:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82657 00:17:02.884 killing process with pid 82657 00:17:02.884 04:20:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:02.884 04:20:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:02.884 04:20:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82657' 00:17:02.884 04:20:15 -- common/autotest_common.sh@955 -- # kill 82657 00:17:02.884 04:20:15 -- common/autotest_common.sh@960 -- # wait 82657 00:17:03.144 04:20:15 -- host/failover.sh@110 -- # sync 00:17:03.144 04:20:15 -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:03.402 04:20:15 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:17:03.402 04:20:15 -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:03.402 04:20:15 -- host/failover.sh@116 -- # nvmftestfini 00:17:03.402 04:20:15 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:03.402 04:20:15 -- nvmf/common.sh@116 -- # sync 00:17:03.402 04:20:15 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:03.402 04:20:15 -- nvmf/common.sh@119 -- # set +e 00:17:03.402 04:20:15 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:03.402 04:20:15 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:03.660 rmmod nvme_tcp 
00:17:03.660 rmmod nvme_fabrics 00:17:03.660 rmmod nvme_keyring 00:17:03.660 04:20:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:03.660 04:20:16 -- nvmf/common.sh@123 -- # set -e 00:17:03.660 04:20:16 -- nvmf/common.sh@124 -- # return 0 00:17:03.660 04:20:16 -- nvmf/common.sh@477 -- # '[' -n 82393 ']' 00:17:03.660 04:20:16 -- nvmf/common.sh@478 -- # killprocess 82393 00:17:03.660 04:20:16 -- common/autotest_common.sh@936 -- # '[' -z 82393 ']' 00:17:03.660 04:20:16 -- common/autotest_common.sh@940 -- # kill -0 82393 00:17:03.660 04:20:16 -- common/autotest_common.sh@941 -- # uname 00:17:03.660 04:20:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:03.660 04:20:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82393 00:17:03.660 killing process with pid 82393 00:17:03.660 04:20:16 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:03.660 04:20:16 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:03.660 04:20:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82393' 00:17:03.660 04:20:16 -- common/autotest_common.sh@955 -- # kill 82393 00:17:03.660 04:20:16 -- common/autotest_common.sh@960 -- # wait 82393 00:17:03.919 04:20:16 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:03.919 04:20:16 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:03.919 04:20:16 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:03.919 04:20:16 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:03.919 04:20:16 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:03.919 04:20:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:03.919 04:20:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:03.919 04:20:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:03.919 04:20:16 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:17:03.919 00:17:03.919 real 0m32.993s 00:17:03.919 user 2m7.900s 00:17:03.919 sys 0m5.626s 00:17:03.919 04:20:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:03.919 04:20:16 -- common/autotest_common.sh@10 -- # set +x 00:17:03.919 ************************************ 00:17:03.919 END TEST nvmf_failover 00:17:03.919 ************************************ 00:17:03.919 04:20:16 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:17:03.919 04:20:16 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:03.919 04:20:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:03.919 04:20:16 -- common/autotest_common.sh@10 -- # set +x 00:17:03.919 ************************************ 00:17:03.919 START TEST nvmf_discovery 00:17:03.919 ************************************ 00:17:03.919 04:20:16 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:17:03.919 * Looking for test storage... 
00:17:03.920 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:03.920 04:20:16 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:03.920 04:20:16 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:03.920 04:20:16 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:04.179 04:20:16 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:04.179 04:20:16 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:04.179 04:20:16 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:04.179 04:20:16 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:04.179 04:20:16 -- scripts/common.sh@335 -- # IFS=.-: 00:17:04.179 04:20:16 -- scripts/common.sh@335 -- # read -ra ver1 00:17:04.179 04:20:16 -- scripts/common.sh@336 -- # IFS=.-: 00:17:04.179 04:20:16 -- scripts/common.sh@336 -- # read -ra ver2 00:17:04.179 04:20:16 -- scripts/common.sh@337 -- # local 'op=<' 00:17:04.179 04:20:16 -- scripts/common.sh@339 -- # ver1_l=2 00:17:04.179 04:20:16 -- scripts/common.sh@340 -- # ver2_l=1 00:17:04.179 04:20:16 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:04.179 04:20:16 -- scripts/common.sh@343 -- # case "$op" in 00:17:04.179 04:20:16 -- scripts/common.sh@344 -- # : 1 00:17:04.179 04:20:16 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:04.179 04:20:16 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:04.179 04:20:16 -- scripts/common.sh@364 -- # decimal 1 00:17:04.179 04:20:16 -- scripts/common.sh@352 -- # local d=1 00:17:04.179 04:20:16 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:04.179 04:20:16 -- scripts/common.sh@354 -- # echo 1 00:17:04.179 04:20:16 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:04.179 04:20:16 -- scripts/common.sh@365 -- # decimal 2 00:17:04.179 04:20:16 -- scripts/common.sh@352 -- # local d=2 00:17:04.179 04:20:16 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:04.179 04:20:16 -- scripts/common.sh@354 -- # echo 2 00:17:04.179 04:20:16 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:04.179 04:20:16 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:04.179 04:20:16 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:04.179 04:20:16 -- scripts/common.sh@367 -- # return 0 00:17:04.179 04:20:16 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:04.179 04:20:16 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:04.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:04.179 --rc genhtml_branch_coverage=1 00:17:04.179 --rc genhtml_function_coverage=1 00:17:04.179 --rc genhtml_legend=1 00:17:04.179 --rc geninfo_all_blocks=1 00:17:04.179 --rc geninfo_unexecuted_blocks=1 00:17:04.179 00:17:04.179 ' 00:17:04.179 04:20:16 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:04.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:04.179 --rc genhtml_branch_coverage=1 00:17:04.179 --rc genhtml_function_coverage=1 00:17:04.179 --rc genhtml_legend=1 00:17:04.179 --rc geninfo_all_blocks=1 00:17:04.179 --rc geninfo_unexecuted_blocks=1 00:17:04.179 00:17:04.179 ' 00:17:04.179 04:20:16 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:04.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:04.179 --rc genhtml_branch_coverage=1 00:17:04.179 --rc genhtml_function_coverage=1 00:17:04.179 --rc genhtml_legend=1 00:17:04.179 --rc geninfo_all_blocks=1 00:17:04.179 --rc geninfo_unexecuted_blocks=1 00:17:04.179 00:17:04.179 ' 00:17:04.179 
04:20:16 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:04.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:04.179 --rc genhtml_branch_coverage=1 00:17:04.179 --rc genhtml_function_coverage=1 00:17:04.179 --rc genhtml_legend=1 00:17:04.179 --rc geninfo_all_blocks=1 00:17:04.179 --rc geninfo_unexecuted_blocks=1 00:17:04.179 00:17:04.179 ' 00:17:04.179 04:20:16 -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:04.179 04:20:16 -- nvmf/common.sh@7 -- # uname -s 00:17:04.179 04:20:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:04.179 04:20:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:04.179 04:20:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:04.179 04:20:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:04.179 04:20:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:04.179 04:20:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:04.179 04:20:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:04.179 04:20:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:04.179 04:20:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:04.179 04:20:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:04.179 04:20:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cb4d3929-adbe-4142-b5d1-990bbf2d4fca 00:17:04.179 04:20:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=cb4d3929-adbe-4142-b5d1-990bbf2d4fca 00:17:04.179 04:20:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:04.179 04:20:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:04.179 04:20:16 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:04.179 04:20:16 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:04.179 04:20:16 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:04.179 04:20:16 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:04.179 04:20:16 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:04.179 04:20:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:04.179 04:20:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:04.180 04:20:16 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:04.180 04:20:16 -- paths/export.sh@5 -- # export PATH 00:17:04.180 04:20:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:04.180 04:20:16 -- nvmf/common.sh@46 -- # : 0 00:17:04.180 04:20:16 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:04.180 04:20:16 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:04.180 04:20:16 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:04.180 04:20:16 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:04.180 04:20:16 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:04.180 04:20:16 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:04.180 04:20:16 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:04.180 04:20:16 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:04.180 04:20:16 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:17:04.180 04:20:16 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:17:04.180 04:20:16 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:17:04.180 04:20:16 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:17:04.180 04:20:16 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:17:04.180 04:20:16 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:17:04.180 04:20:16 -- host/discovery.sh@25 -- # nvmftestinit 00:17:04.180 04:20:16 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:04.180 04:20:16 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:04.180 04:20:16 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:04.180 04:20:16 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:04.180 04:20:16 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:04.180 04:20:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:04.180 04:20:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:04.180 04:20:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:04.180 04:20:16 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:17:04.180 04:20:16 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:17:04.180 04:20:16 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:17:04.180 04:20:16 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:17:04.180 04:20:16 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:17:04.180 04:20:16 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:17:04.180 04:20:16 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:04.180 04:20:16 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:04.180 04:20:16 -- 
nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:04.180 04:20:16 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:17:04.180 04:20:16 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:04.180 04:20:16 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:04.180 04:20:16 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:04.180 04:20:16 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:04.180 04:20:16 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:04.180 04:20:16 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:04.180 04:20:16 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:04.180 04:20:16 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:04.180 04:20:16 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:17:04.180 04:20:16 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:17:04.180 Cannot find device "nvmf_tgt_br" 00:17:04.180 04:20:16 -- nvmf/common.sh@154 -- # true 00:17:04.180 04:20:16 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:17:04.180 Cannot find device "nvmf_tgt_br2" 00:17:04.180 04:20:16 -- nvmf/common.sh@155 -- # true 00:17:04.180 04:20:16 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:17:04.180 04:20:16 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:17:04.180 Cannot find device "nvmf_tgt_br" 00:17:04.180 04:20:16 -- nvmf/common.sh@157 -- # true 00:17:04.180 04:20:16 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:17:04.180 Cannot find device "nvmf_tgt_br2" 00:17:04.180 04:20:16 -- nvmf/common.sh@158 -- # true 00:17:04.180 04:20:16 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:17:04.180 04:20:16 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:17:04.180 04:20:16 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:04.180 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:04.180 04:20:16 -- nvmf/common.sh@161 -- # true 00:17:04.180 04:20:16 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:04.180 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:04.180 04:20:16 -- nvmf/common.sh@162 -- # true 00:17:04.180 04:20:16 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:17:04.180 04:20:16 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:04.180 04:20:16 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:04.180 04:20:16 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:04.180 04:20:16 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:04.180 04:20:16 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:04.180 04:20:16 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:04.439 04:20:16 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:04.439 04:20:16 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:04.439 04:20:16 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:17:04.439 04:20:16 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:17:04.439 04:20:16 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:17:04.439 04:20:16 -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:17:04.439 04:20:16 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:04.439 04:20:16 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:04.439 04:20:16 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:04.439 04:20:16 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:17:04.439 04:20:16 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:17:04.439 04:20:16 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:17:04.439 04:20:16 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:04.439 04:20:16 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:04.439 04:20:16 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:04.439 04:20:16 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:04.439 04:20:16 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:17:04.439 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:04.439 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 00:17:04.439 00:17:04.439 --- 10.0.0.2 ping statistics --- 00:17:04.439 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:04.439 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:17:04.439 04:20:16 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:17:04.439 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:04.439 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.090 ms 00:17:04.439 00:17:04.439 --- 10.0.0.3 ping statistics --- 00:17:04.439 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:04.439 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:17:04.439 04:20:16 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:04.439 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:04.439 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:17:04.439 00:17:04.439 --- 10.0.0.1 ping statistics --- 00:17:04.439 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:04.439 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:17:04.439 04:20:16 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:04.439 04:20:16 -- nvmf/common.sh@421 -- # return 0 00:17:04.439 04:20:16 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:04.439 04:20:16 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:04.439 04:20:16 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:04.439 04:20:16 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:04.439 04:20:16 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:04.439 04:20:16 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:04.439 04:20:16 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:04.439 04:20:16 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:17:04.439 04:20:16 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:04.439 04:20:16 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:04.439 04:20:16 -- common/autotest_common.sh@10 -- # set +x 00:17:04.439 04:20:16 -- nvmf/common.sh@469 -- # nvmfpid=83017 00:17:04.439 04:20:16 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:04.439 04:20:16 -- nvmf/common.sh@470 -- # waitforlisten 83017 00:17:04.439 04:20:16 -- common/autotest_common.sh@829 -- # '[' -z 83017 ']' 00:17:04.439 04:20:16 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:04.439 04:20:16 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:04.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:04.439 04:20:16 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:04.439 04:20:16 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:04.439 04:20:16 -- common/autotest_common.sh@10 -- # set +x 00:17:04.439 [2024-12-06 04:20:16.934572] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:04.439 [2024-12-06 04:20:16.934658] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:04.697 [2024-12-06 04:20:17.076068] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:04.697 [2024-12-06 04:20:17.148663] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:04.697 [2024-12-06 04:20:17.148836] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:04.697 [2024-12-06 04:20:17.148848] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:04.697 [2024-12-06 04:20:17.148857] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
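The nvmf_veth_init sequence above builds the topology the discovery test runs on: the target addresses 10.0.0.2 and 10.0.0.3 sit on veth interfaces inside the nvmf_tgt_ns_spdk namespace, the initiator keeps 10.0.0.1 on the host, the peer ends are bridged through nvmf_br, and an iptables rule admits TCP port 4420; the three pings are just reachability checks. Stripped of the helper wrappers, and with the second target interface and the link-up steps handled the same way, the core of it is:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator side stays on the host
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target side is moved into the netns
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                             # host -> target address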
00:17:04.697 [2024-12-06 04:20:17.148886] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:05.632 04:20:17 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:05.632 04:20:17 -- common/autotest_common.sh@862 -- # return 0 00:17:05.632 04:20:17 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:05.632 04:20:17 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:05.632 04:20:17 -- common/autotest_common.sh@10 -- # set +x 00:17:05.632 04:20:17 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:05.632 04:20:17 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:05.632 04:20:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.632 04:20:17 -- common/autotest_common.sh@10 -- # set +x 00:17:05.632 [2024-12-06 04:20:18.000115] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:05.632 04:20:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.632 04:20:18 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:17:05.632 04:20:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.632 04:20:18 -- common/autotest_common.sh@10 -- # set +x 00:17:05.632 [2024-12-06 04:20:18.008226] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:17:05.632 04:20:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.632 04:20:18 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:17:05.632 04:20:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.632 04:20:18 -- common/autotest_common.sh@10 -- # set +x 00:17:05.632 null0 00:17:05.632 04:20:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.632 04:20:18 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:17:05.632 04:20:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.632 04:20:18 -- common/autotest_common.sh@10 -- # set +x 00:17:05.632 null1 00:17:05.632 04:20:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.632 04:20:18 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:17:05.632 04:20:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.632 04:20:18 -- common/autotest_common.sh@10 -- # set +x 00:17:05.632 04:20:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.632 04:20:18 -- host/discovery.sh@45 -- # hostpid=83050 00:17:05.632 04:20:18 -- host/discovery.sh@46 -- # waitforlisten 83050 /tmp/host.sock 00:17:05.632 04:20:18 -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:17:05.632 04:20:18 -- common/autotest_common.sh@829 -- # '[' -z 83050 ']' 00:17:05.632 04:20:18 -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:17:05.632 04:20:18 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:05.632 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:17:05.632 04:20:18 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:17:05.632 04:20:18 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:05.632 04:20:18 -- common/autotest_common.sh@10 -- # set +x 00:17:05.632 [2024-12-06 04:20:18.094284] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
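From this point two SPDK processes are in play: the target (nvmf_tgt -m 0x2, pid 83017, RPC on the default /var/tmp/spdk.sock, running inside the namespace) and a host-side instance started with -m 0x1 -r /tmp/host.sock (pid 83050) that acts as the NVMe-oF initiator for the discovery test. The target-side preparation traced above, with rpc_cmd being the test suite's thin wrapper around these calls, amounts to roughly:

    # on the target's RPC socket
    rpc.py nvmf_create_transport -t tcp -o -u 8192                  # TCP transport (flags as set by the test's NVMF_TRANSPORT_OPTS)
    rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
        -t tcp -a 10.0.0.2 -s 8009                                  # discovery service on port 8009
    rpc.py bdev_null_create null0 1000 512                          # two 1000 MiB, 512 B-block null bdevs
    rpc.py bdev_null_create null1 1000 512                          #   to back the namespaces added later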
00:17:05.632 [2024-12-06 04:20:18.094434] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83050 ] 00:17:05.899 [2024-12-06 04:20:18.235227] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:05.899 [2024-12-06 04:20:18.330304] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:05.899 [2024-12-06 04:20:18.330554] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:06.836 04:20:19 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:06.836 04:20:19 -- common/autotest_common.sh@862 -- # return 0 00:17:06.836 04:20:19 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:06.836 04:20:19 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:17:06.836 04:20:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.836 04:20:19 -- common/autotest_common.sh@10 -- # set +x 00:17:06.836 04:20:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.836 04:20:19 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:17:06.836 04:20:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.836 04:20:19 -- common/autotest_common.sh@10 -- # set +x 00:17:06.836 04:20:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.836 04:20:19 -- host/discovery.sh@72 -- # notify_id=0 00:17:06.836 04:20:19 -- host/discovery.sh@78 -- # get_subsystem_names 00:17:06.836 04:20:19 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:06.836 04:20:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.836 04:20:19 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:06.836 04:20:19 -- common/autotest_common.sh@10 -- # set +x 00:17:06.836 04:20:19 -- host/discovery.sh@59 -- # sort 00:17:06.836 04:20:19 -- host/discovery.sh@59 -- # xargs 00:17:06.836 04:20:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.836 04:20:19 -- host/discovery.sh@78 -- # [[ '' == '' ]] 00:17:06.836 04:20:19 -- host/discovery.sh@79 -- # get_bdev_list 00:17:06.836 04:20:19 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:06.836 04:20:19 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:06.836 04:20:19 -- host/discovery.sh@55 -- # sort 00:17:06.836 04:20:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.836 04:20:19 -- common/autotest_common.sh@10 -- # set +x 00:17:06.836 04:20:19 -- host/discovery.sh@55 -- # xargs 00:17:06.836 04:20:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.836 04:20:19 -- host/discovery.sh@79 -- # [[ '' == '' ]] 00:17:06.836 04:20:19 -- host/discovery.sh@81 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:17:06.836 04:20:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.836 04:20:19 -- common/autotest_common.sh@10 -- # set +x 00:17:06.836 04:20:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.836 04:20:19 -- host/discovery.sh@82 -- # get_subsystem_names 00:17:06.836 04:20:19 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:06.836 04:20:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.836 04:20:19 -- 
host/discovery.sh@59 -- # jq -r '.[].name' 00:17:06.836 04:20:19 -- common/autotest_common.sh@10 -- # set +x 00:17:06.836 04:20:19 -- host/discovery.sh@59 -- # sort 00:17:06.836 04:20:19 -- host/discovery.sh@59 -- # xargs 00:17:06.836 04:20:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.836 04:20:19 -- host/discovery.sh@82 -- # [[ '' == '' ]] 00:17:06.836 04:20:19 -- host/discovery.sh@83 -- # get_bdev_list 00:17:06.836 04:20:19 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:06.836 04:20:19 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:06.836 04:20:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.836 04:20:19 -- common/autotest_common.sh@10 -- # set +x 00:17:06.836 04:20:19 -- host/discovery.sh@55 -- # sort 00:17:06.836 04:20:19 -- host/discovery.sh@55 -- # xargs 00:17:06.836 04:20:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.836 04:20:19 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:17:06.836 04:20:19 -- host/discovery.sh@85 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:17:06.836 04:20:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.836 04:20:19 -- common/autotest_common.sh@10 -- # set +x 00:17:06.836 04:20:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.836 04:20:19 -- host/discovery.sh@86 -- # get_subsystem_names 00:17:06.836 04:20:19 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:06.836 04:20:19 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:06.836 04:20:19 -- host/discovery.sh@59 -- # sort 00:17:06.836 04:20:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.836 04:20:19 -- common/autotest_common.sh@10 -- # set +x 00:17:06.836 04:20:19 -- host/discovery.sh@59 -- # xargs 00:17:06.836 04:20:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.836 04:20:19 -- host/discovery.sh@86 -- # [[ '' == '' ]] 00:17:06.836 04:20:19 -- host/discovery.sh@87 -- # get_bdev_list 00:17:06.836 04:20:19 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:06.836 04:20:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.836 04:20:19 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:06.836 04:20:19 -- common/autotest_common.sh@10 -- # set +x 00:17:06.837 04:20:19 -- host/discovery.sh@55 -- # sort 00:17:06.837 04:20:19 -- host/discovery.sh@55 -- # xargs 00:17:06.837 04:20:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.096 04:20:19 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:17:07.096 04:20:19 -- host/discovery.sh@91 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:07.096 04:20:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.096 04:20:19 -- common/autotest_common.sh@10 -- # set +x 00:17:07.096 [2024-12-06 04:20:19.432645] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:07.096 04:20:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.096 04:20:19 -- host/discovery.sh@92 -- # get_subsystem_names 00:17:07.096 04:20:19 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:07.096 04:20:19 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:07.096 04:20:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.096 04:20:19 -- host/discovery.sh@59 -- # sort 00:17:07.096 04:20:19 -- common/autotest_common.sh@10 -- # set +x 00:17:07.096 04:20:19 -- host/discovery.sh@59 -- # xargs 
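The repeated get_subsystem_names / get_bdev_list / get_notification_count evaluations in this stretch are small discovery.sh helpers that poll the host-side application over /tmp/host.sock and flatten the JSON into one sortable line. Here the first two still return empty strings because, although nqn.2016-06.io.spdk:cnode0 now exists on the target with null0 attached, it has no listener yet, so the initiator's discovery service has nothing to attach. Reconstructed from the trace (the real helpers also record results in notification_count and notify_id), they look roughly like:

    get_subsystem_names() {   # controllers attached by the host-side bdev_nvme (e.g. nvme0)
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }
    get_bdev_list() {         # bdevs exposed on top of them (nvme0n1, nvme0n2, ...)
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    get_notification_count() { # bdev notifications received since the last recorded notify_id
        rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length'
    }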
00:17:07.096 04:20:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.096 04:20:19 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:17:07.096 04:20:19 -- host/discovery.sh@93 -- # get_bdev_list 00:17:07.096 04:20:19 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:07.096 04:20:19 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:07.096 04:20:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.096 04:20:19 -- common/autotest_common.sh@10 -- # set +x 00:17:07.096 04:20:19 -- host/discovery.sh@55 -- # xargs 00:17:07.096 04:20:19 -- host/discovery.sh@55 -- # sort 00:17:07.096 04:20:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.096 04:20:19 -- host/discovery.sh@93 -- # [[ '' == '' ]] 00:17:07.096 04:20:19 -- host/discovery.sh@94 -- # get_notification_count 00:17:07.096 04:20:19 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:17:07.096 04:20:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.096 04:20:19 -- common/autotest_common.sh@10 -- # set +x 00:17:07.096 04:20:19 -- host/discovery.sh@74 -- # jq '. | length' 00:17:07.096 04:20:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.096 04:20:19 -- host/discovery.sh@74 -- # notification_count=0 00:17:07.096 04:20:19 -- host/discovery.sh@75 -- # notify_id=0 00:17:07.096 04:20:19 -- host/discovery.sh@95 -- # [[ 0 == 0 ]] 00:17:07.096 04:20:19 -- host/discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:17:07.096 04:20:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.096 04:20:19 -- common/autotest_common.sh@10 -- # set +x 00:17:07.096 04:20:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.096 04:20:19 -- host/discovery.sh@100 -- # sleep 1 00:17:07.666 [2024-12-06 04:20:20.113092] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:17:07.666 [2024-12-06 04:20:20.113148] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:17:07.666 [2024-12-06 04:20:20.113166] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:17:07.666 [2024-12-06 04:20:20.119150] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:17:07.666 [2024-12-06 04:20:20.175588] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:17:07.666 [2024-12-06 04:20:20.175633] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:17:08.235 04:20:20 -- host/discovery.sh@101 -- # get_subsystem_names 00:17:08.235 04:20:20 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:08.235 04:20:20 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:08.235 04:20:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.235 04:20:20 -- common/autotest_common.sh@10 -- # set +x 00:17:08.235 04:20:20 -- host/discovery.sh@59 -- # sort 00:17:08.235 04:20:20 -- host/discovery.sh@59 -- # xargs 00:17:08.235 04:20:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.235 04:20:20 -- host/discovery.sh@101 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.235 04:20:20 -- host/discovery.sh@102 -- # get_bdev_list 00:17:08.235 04:20:20 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:08.235 
04:20:20 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:08.235 04:20:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.235 04:20:20 -- common/autotest_common.sh@10 -- # set +x 00:17:08.235 04:20:20 -- host/discovery.sh@55 -- # sort 00:17:08.235 04:20:20 -- host/discovery.sh@55 -- # xargs 00:17:08.235 04:20:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.235 04:20:20 -- host/discovery.sh@102 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:17:08.235 04:20:20 -- host/discovery.sh@103 -- # get_subsystem_paths nvme0 00:17:08.235 04:20:20 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:17:08.235 04:20:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.235 04:20:20 -- common/autotest_common.sh@10 -- # set +x 00:17:08.235 04:20:20 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:17:08.235 04:20:20 -- host/discovery.sh@63 -- # xargs 00:17:08.235 04:20:20 -- host/discovery.sh@63 -- # sort -n 00:17:08.235 04:20:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.235 04:20:20 -- host/discovery.sh@103 -- # [[ 4420 == \4\4\2\0 ]] 00:17:08.235 04:20:20 -- host/discovery.sh@104 -- # get_notification_count 00:17:08.235 04:20:20 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:17:08.235 04:20:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.235 04:20:20 -- common/autotest_common.sh@10 -- # set +x 00:17:08.235 04:20:20 -- host/discovery.sh@74 -- # jq '. | length' 00:17:08.235 04:20:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.494 04:20:20 -- host/discovery.sh@74 -- # notification_count=1 00:17:08.494 04:20:20 -- host/discovery.sh@75 -- # notify_id=1 00:17:08.494 04:20:20 -- host/discovery.sh@105 -- # [[ 1 == 1 ]] 00:17:08.494 04:20:20 -- host/discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:17:08.494 04:20:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.494 04:20:20 -- common/autotest_common.sh@10 -- # set +x 00:17:08.494 04:20:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.494 04:20:20 -- host/discovery.sh@109 -- # sleep 1 00:17:09.431 04:20:21 -- host/discovery.sh@110 -- # get_bdev_list 00:17:09.431 04:20:21 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:09.431 04:20:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.431 04:20:21 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:09.431 04:20:21 -- common/autotest_common.sh@10 -- # set +x 00:17:09.431 04:20:21 -- host/discovery.sh@55 -- # xargs 00:17:09.431 04:20:21 -- host/discovery.sh@55 -- # sort 00:17:09.431 04:20:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.431 04:20:21 -- host/discovery.sh@110 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:17:09.431 04:20:21 -- host/discovery.sh@111 -- # get_notification_count 00:17:09.431 04:20:21 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:17:09.431 04:20:21 -- host/discovery.sh@74 -- # jq '. 
| length' 00:17:09.431 04:20:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.431 04:20:21 -- common/autotest_common.sh@10 -- # set +x 00:17:09.431 04:20:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.431 04:20:21 -- host/discovery.sh@74 -- # notification_count=1 00:17:09.431 04:20:21 -- host/discovery.sh@75 -- # notify_id=2 00:17:09.431 04:20:21 -- host/discovery.sh@112 -- # [[ 1 == 1 ]] 00:17:09.431 04:20:21 -- host/discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:17:09.431 04:20:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.431 04:20:21 -- common/autotest_common.sh@10 -- # set +x 00:17:09.431 [2024-12-06 04:20:21.951316] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:09.431 [2024-12-06 04:20:21.952209] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:17:09.431 [2024-12-06 04:20:21.952256] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:17:09.431 04:20:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.431 04:20:21 -- host/discovery.sh@117 -- # sleep 1 00:17:09.431 [2024-12-06 04:20:21.958187] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:17:09.691 [2024-12-06 04:20:22.016466] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:17:09.691 [2024-12-06 04:20:22.016492] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:17:09.691 [2024-12-06 04:20:22.016514] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:17:10.630 04:20:22 -- host/discovery.sh@118 -- # get_subsystem_names 00:17:10.630 04:20:22 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:10.630 04:20:22 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:10.630 04:20:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.630 04:20:22 -- host/discovery.sh@59 -- # sort 00:17:10.630 04:20:22 -- host/discovery.sh@59 -- # xargs 00:17:10.630 04:20:22 -- common/autotest_common.sh@10 -- # set +x 00:17:10.630 04:20:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.630 04:20:23 -- host/discovery.sh@118 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.630 04:20:23 -- host/discovery.sh@119 -- # get_bdev_list 00:17:10.630 04:20:23 -- host/discovery.sh@55 -- # sort 00:17:10.630 04:20:23 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:10.630 04:20:23 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:10.630 04:20:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.630 04:20:23 -- host/discovery.sh@55 -- # xargs 00:17:10.630 04:20:23 -- common/autotest_common.sh@10 -- # set +x 00:17:10.630 04:20:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.630 04:20:23 -- host/discovery.sh@119 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:17:10.630 04:20:23 -- host/discovery.sh@120 -- # get_subsystem_paths nvme0 00:17:10.630 04:20:23 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:17:10.630 04:20:23 -- host/discovery.sh@63 -- # sort -n 00:17:10.630 04:20:23 -- host/discovery.sh@63 -- # jq -r 
'.[].ctrlrs[].trid.trsvcid' 00:17:10.630 04:20:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.630 04:20:23 -- common/autotest_common.sh@10 -- # set +x 00:17:10.630 04:20:23 -- host/discovery.sh@63 -- # xargs 00:17:10.630 04:20:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.630 04:20:23 -- host/discovery.sh@120 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:17:10.630 04:20:23 -- host/discovery.sh@121 -- # get_notification_count 00:17:10.630 04:20:23 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:17:10.630 04:20:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.630 04:20:23 -- common/autotest_common.sh@10 -- # set +x 00:17:10.630 04:20:23 -- host/discovery.sh@74 -- # jq '. | length' 00:17:10.630 04:20:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.630 04:20:23 -- host/discovery.sh@74 -- # notification_count=0 00:17:10.630 04:20:23 -- host/discovery.sh@75 -- # notify_id=2 00:17:10.630 04:20:23 -- host/discovery.sh@122 -- # [[ 0 == 0 ]] 00:17:10.630 04:20:23 -- host/discovery.sh@126 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:10.630 04:20:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.630 04:20:23 -- common/autotest_common.sh@10 -- # set +x 00:17:10.630 [2024-12-06 04:20:23.177850] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:17:10.630 [2024-12-06 04:20:23.177903] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:17:10.630 04:20:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.631 04:20:23 -- host/discovery.sh@127 -- # sleep 1 00:17:10.631 [2024-12-06 04:20:23.182905] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:10.631 [2024-12-06 04:20:23.182970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.631 [2024-12-06 04:20:23.182981] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:10.631 [2024-12-06 04:20:23.182990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.631 [2024-12-06 04:20:23.182999] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:10.631 [2024-12-06 04:20:23.183007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.631 [2024-12-06 04:20:23.183016] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:10.631 [2024-12-06 04:20:23.183024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.631 [2024-12-06 04:20:23.183032] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad91f0 is same with the state(5) to be set 00:17:10.631 [2024-12-06 04:20:23.183855] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:17:10.631 [2024-12-06 04:20:23.183877] 
bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:17:10.631 [2024-12-06 04:20:23.183932] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ad91f0 (9): Bad file descriptor 00:17:12.010 04:20:24 -- host/discovery.sh@128 -- # get_subsystem_names 00:17:12.010 04:20:24 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:12.010 04:20:24 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:12.010 04:20:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.010 04:20:24 -- common/autotest_common.sh@10 -- # set +x 00:17:12.010 04:20:24 -- host/discovery.sh@59 -- # sort 00:17:12.010 04:20:24 -- host/discovery.sh@59 -- # xargs 00:17:12.010 04:20:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.010 04:20:24 -- host/discovery.sh@128 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.010 04:20:24 -- host/discovery.sh@129 -- # get_bdev_list 00:17:12.010 04:20:24 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:12.010 04:20:24 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:12.010 04:20:24 -- host/discovery.sh@55 -- # sort 00:17:12.010 04:20:24 -- host/discovery.sh@55 -- # xargs 00:17:12.010 04:20:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.010 04:20:24 -- common/autotest_common.sh@10 -- # set +x 00:17:12.010 04:20:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.010 04:20:24 -- host/discovery.sh@129 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:17:12.010 04:20:24 -- host/discovery.sh@130 -- # get_subsystem_paths nvme0 00:17:12.010 04:20:24 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:17:12.010 04:20:24 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:17:12.010 04:20:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.010 04:20:24 -- host/discovery.sh@63 -- # sort -n 00:17:12.010 04:20:24 -- common/autotest_common.sh@10 -- # set +x 00:17:12.010 04:20:24 -- host/discovery.sh@63 -- # xargs 00:17:12.010 04:20:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.010 04:20:24 -- host/discovery.sh@130 -- # [[ 4421 == \4\4\2\1 ]] 00:17:12.010 04:20:24 -- host/discovery.sh@131 -- # get_notification_count 00:17:12.010 04:20:24 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:17:12.010 04:20:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.010 04:20:24 -- common/autotest_common.sh@10 -- # set +x 00:17:12.010 04:20:24 -- host/discovery.sh@74 -- # jq '. 
| length' 00:17:12.010 04:20:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.010 04:20:24 -- host/discovery.sh@74 -- # notification_count=0 00:17:12.010 04:20:24 -- host/discovery.sh@75 -- # notify_id=2 00:17:12.010 04:20:24 -- host/discovery.sh@132 -- # [[ 0 == 0 ]] 00:17:12.010 04:20:24 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:17:12.010 04:20:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.010 04:20:24 -- common/autotest_common.sh@10 -- # set +x 00:17:12.010 04:20:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.010 04:20:24 -- host/discovery.sh@135 -- # sleep 1 00:17:12.949 04:20:25 -- host/discovery.sh@136 -- # get_subsystem_names 00:17:12.949 04:20:25 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:12.949 04:20:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.949 04:20:25 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:12.949 04:20:25 -- common/autotest_common.sh@10 -- # set +x 00:17:12.949 04:20:25 -- host/discovery.sh@59 -- # sort 00:17:12.949 04:20:25 -- host/discovery.sh@59 -- # xargs 00:17:12.949 04:20:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.949 04:20:25 -- host/discovery.sh@136 -- # [[ '' == '' ]] 00:17:12.949 04:20:25 -- host/discovery.sh@137 -- # get_bdev_list 00:17:12.949 04:20:25 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:12.949 04:20:25 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:12.949 04:20:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.949 04:20:25 -- common/autotest_common.sh@10 -- # set +x 00:17:12.949 04:20:25 -- host/discovery.sh@55 -- # sort 00:17:12.949 04:20:25 -- host/discovery.sh@55 -- # xargs 00:17:12.949 04:20:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.207 04:20:25 -- host/discovery.sh@137 -- # [[ '' == '' ]] 00:17:13.207 04:20:25 -- host/discovery.sh@138 -- # get_notification_count 00:17:13.207 04:20:25 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:17:13.207 04:20:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.207 04:20:25 -- common/autotest_common.sh@10 -- # set +x 00:17:13.207 04:20:25 -- host/discovery.sh@74 -- # jq '. 
| length' 00:17:13.207 04:20:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.207 04:20:25 -- host/discovery.sh@74 -- # notification_count=2 00:17:13.207 04:20:25 -- host/discovery.sh@75 -- # notify_id=4 00:17:13.207 04:20:25 -- host/discovery.sh@139 -- # [[ 2 == 2 ]] 00:17:13.207 04:20:25 -- host/discovery.sh@142 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:13.207 04:20:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.207 04:20:25 -- common/autotest_common.sh@10 -- # set +x 00:17:14.145 [2024-12-06 04:20:26.615716] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:17:14.145 [2024-12-06 04:20:26.615760] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:17:14.145 [2024-12-06 04:20:26.615777] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:17:14.145 [2024-12-06 04:20:26.621744] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:17:14.145 [2024-12-06 04:20:26.681107] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:17:14.145 [2024-12-06 04:20:26.681185] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:17:14.145 04:20:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.145 04:20:26 -- host/discovery.sh@144 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:14.145 04:20:26 -- common/autotest_common.sh@650 -- # local es=0 00:17:14.145 04:20:26 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:14.145 04:20:26 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:14.145 04:20:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:14.145 04:20:26 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:14.145 04:20:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:14.145 04:20:26 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:14.145 04:20:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.145 04:20:26 -- common/autotest_common.sh@10 -- # set +x 00:17:14.145 request: 00:17:14.145 { 00:17:14.145 "name": "nvme", 00:17:14.145 "trtype": "tcp", 00:17:14.145 "traddr": "10.0.0.2", 00:17:14.145 "hostnqn": "nqn.2021-12.io.spdk:test", 00:17:14.145 "adrfam": "ipv4", 00:17:14.145 "trsvcid": "8009", 00:17:14.145 "wait_for_attach": true, 00:17:14.145 "method": "bdev_nvme_start_discovery", 00:17:14.145 "req_id": 1 00:17:14.145 } 00:17:14.145 Got JSON-RPC error response 00:17:14.145 response: 00:17:14.145 { 00:17:14.145 "code": -17, 00:17:14.145 "message": "File exists" 00:17:14.145 } 00:17:14.145 04:20:26 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:14.145 04:20:26 -- common/autotest_common.sh@653 -- # es=1 00:17:14.145 04:20:26 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:14.145 04:20:26 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:14.145 04:20:26 -- 
common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:14.145 04:20:26 -- host/discovery.sh@146 -- # get_discovery_ctrlrs 00:17:14.145 04:20:26 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:17:14.145 04:20:26 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:17:14.145 04:20:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.405 04:20:26 -- common/autotest_common.sh@10 -- # set +x 00:17:14.405 04:20:26 -- host/discovery.sh@67 -- # sort 00:17:14.405 04:20:26 -- host/discovery.sh@67 -- # xargs 00:17:14.405 04:20:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.405 04:20:26 -- host/discovery.sh@146 -- # [[ nvme == \n\v\m\e ]] 00:17:14.405 04:20:26 -- host/discovery.sh@147 -- # get_bdev_list 00:17:14.405 04:20:26 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:14.405 04:20:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.405 04:20:26 -- common/autotest_common.sh@10 -- # set +x 00:17:14.405 04:20:26 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:14.405 04:20:26 -- host/discovery.sh@55 -- # sort 00:17:14.405 04:20:26 -- host/discovery.sh@55 -- # xargs 00:17:14.405 04:20:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.405 04:20:26 -- host/discovery.sh@147 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:17:14.405 04:20:26 -- host/discovery.sh@150 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:14.405 04:20:26 -- common/autotest_common.sh@650 -- # local es=0 00:17:14.405 04:20:26 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:14.405 04:20:26 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:14.405 04:20:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:14.405 04:20:26 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:14.405 04:20:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:14.405 04:20:26 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:14.405 04:20:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.405 04:20:26 -- common/autotest_common.sh@10 -- # set +x 00:17:14.405 request: 00:17:14.405 { 00:17:14.405 "name": "nvme_second", 00:17:14.405 "trtype": "tcp", 00:17:14.405 "traddr": "10.0.0.2", 00:17:14.405 "hostnqn": "nqn.2021-12.io.spdk:test", 00:17:14.405 "adrfam": "ipv4", 00:17:14.405 "trsvcid": "8009", 00:17:14.405 "wait_for_attach": true, 00:17:14.405 "method": "bdev_nvme_start_discovery", 00:17:14.405 "req_id": 1 00:17:14.405 } 00:17:14.405 Got JSON-RPC error response 00:17:14.405 response: 00:17:14.405 { 00:17:14.405 "code": -17, 00:17:14.405 "message": "File exists" 00:17:14.405 } 00:17:14.405 04:20:26 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:14.405 04:20:26 -- common/autotest_common.sh@653 -- # es=1 00:17:14.405 04:20:26 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:14.405 04:20:26 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:14.405 04:20:26 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:14.405 04:20:26 -- host/discovery.sh@152 -- # get_discovery_ctrlrs 00:17:14.405 04:20:26 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:17:14.405 
04:20:26 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:17:14.405 04:20:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.405 04:20:26 -- host/discovery.sh@67 -- # sort 00:17:14.405 04:20:26 -- common/autotest_common.sh@10 -- # set +x 00:17:14.405 04:20:26 -- host/discovery.sh@67 -- # xargs 00:17:14.405 04:20:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.405 04:20:26 -- host/discovery.sh@152 -- # [[ nvme == \n\v\m\e ]] 00:17:14.405 04:20:26 -- host/discovery.sh@153 -- # get_bdev_list 00:17:14.405 04:20:26 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:14.405 04:20:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.405 04:20:26 -- common/autotest_common.sh@10 -- # set +x 00:17:14.405 04:20:26 -- host/discovery.sh@55 -- # sort 00:17:14.405 04:20:26 -- host/discovery.sh@55 -- # xargs 00:17:14.405 04:20:26 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:14.405 04:20:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.405 04:20:26 -- host/discovery.sh@153 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:17:14.405 04:20:26 -- host/discovery.sh@156 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:17:14.405 04:20:26 -- common/autotest_common.sh@650 -- # local es=0 00:17:14.405 04:20:26 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:17:14.405 04:20:26 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:14.405 04:20:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:14.405 04:20:26 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:14.405 04:20:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:14.405 04:20:26 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:17:14.405 04:20:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.405 04:20:26 -- common/autotest_common.sh@10 -- # set +x 00:17:15.784 [2024-12-06 04:20:27.959110] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:15.784 [2024-12-06 04:20:27.959228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:15.784 [2024-12-06 04:20:27.959272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:15.784 [2024-12-06 04:20:27.959288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b725c0 with addr=10.0.0.2, port=8010 00:17:15.784 [2024-12-06 04:20:27.959310] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:17:15.784 [2024-12-06 04:20:27.959319] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:17:15.784 [2024-12-06 04:20:27.959328] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:17:16.722 [2024-12-06 04:20:28.959125] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:16.722 [2024-12-06 04:20:28.959236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:16.722 [2024-12-06 04:20:28.959279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:16.722 [2024-12-06 
04:20:28.959296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b34bc0 with addr=10.0.0.2, port=8010 00:17:16.722 [2024-12-06 04:20:28.959319] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:17:16.722 [2024-12-06 04:20:28.959328] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:17:16.722 [2024-12-06 04:20:28.959338] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:17:17.661 [2024-12-06 04:20:29.958982] bdev_nvme.c:6802:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:17:17.661 request: 00:17:17.661 { 00:17:17.661 "name": "nvme_second", 00:17:17.661 "trtype": "tcp", 00:17:17.661 "traddr": "10.0.0.2", 00:17:17.661 "hostnqn": "nqn.2021-12.io.spdk:test", 00:17:17.661 "adrfam": "ipv4", 00:17:17.661 "trsvcid": "8010", 00:17:17.661 "attach_timeout_ms": 3000, 00:17:17.661 "method": "bdev_nvme_start_discovery", 00:17:17.661 "req_id": 1 00:17:17.661 } 00:17:17.661 Got JSON-RPC error response 00:17:17.661 response: 00:17:17.661 { 00:17:17.661 "code": -110, 00:17:17.661 "message": "Connection timed out" 00:17:17.661 } 00:17:17.661 04:20:29 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:17.661 04:20:29 -- common/autotest_common.sh@653 -- # es=1 00:17:17.661 04:20:29 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:17.661 04:20:29 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:17.661 04:20:29 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:17.661 04:20:29 -- host/discovery.sh@158 -- # get_discovery_ctrlrs 00:17:17.661 04:20:29 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:17:17.661 04:20:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.661 04:20:29 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:17:17.661 04:20:29 -- common/autotest_common.sh@10 -- # set +x 00:17:17.661 04:20:29 -- host/discovery.sh@67 -- # sort 00:17:17.661 04:20:29 -- host/discovery.sh@67 -- # xargs 00:17:17.661 04:20:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.661 04:20:30 -- host/discovery.sh@158 -- # [[ nvme == \n\v\m\e ]] 00:17:17.661 04:20:30 -- host/discovery.sh@160 -- # trap - SIGINT SIGTERM EXIT 00:17:17.661 04:20:30 -- host/discovery.sh@162 -- # kill 83050 00:17:17.661 04:20:30 -- host/discovery.sh@163 -- # nvmftestfini 00:17:17.661 04:20:30 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:17.661 04:20:30 -- nvmf/common.sh@116 -- # sync 00:17:17.661 04:20:30 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:17.661 04:20:30 -- nvmf/common.sh@119 -- # set +e 00:17:17.661 04:20:30 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:17.661 04:20:30 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:17.661 rmmod nvme_tcp 00:17:17.661 rmmod nvme_fabrics 00:17:17.661 rmmod nvme_keyring 00:17:17.661 04:20:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:17.661 04:20:30 -- nvmf/common.sh@123 -- # set -e 00:17:17.661 04:20:30 -- nvmf/common.sh@124 -- # return 0 00:17:17.661 04:20:30 -- nvmf/common.sh@477 -- # '[' -n 83017 ']' 00:17:17.661 04:20:30 -- nvmf/common.sh@478 -- # killprocess 83017 00:17:17.661 04:20:30 -- common/autotest_common.sh@936 -- # '[' -z 83017 ']' 00:17:17.661 04:20:30 -- common/autotest_common.sh@940 -- # kill -0 83017 00:17:17.661 04:20:30 -- common/autotest_common.sh@941 -- # uname 00:17:17.661 04:20:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:17.661 04:20:30 
-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83017 00:17:17.661 04:20:30 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:17.661 killing process with pid 83017 00:17:17.661 04:20:30 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:17.661 04:20:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83017' 00:17:17.661 04:20:30 -- common/autotest_common.sh@955 -- # kill 83017 00:17:17.661 04:20:30 -- common/autotest_common.sh@960 -- # wait 83017 00:17:17.921 04:20:30 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:17.921 04:20:30 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:17.921 04:20:30 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:17.921 04:20:30 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:17.921 04:20:30 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:17.921 04:20:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:17.921 04:20:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:17.921 04:20:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:17.921 04:20:30 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:17:17.921 00:17:17.921 real 0m14.069s 00:17:17.921 user 0m26.923s 00:17:17.921 sys 0m2.314s 00:17:17.921 04:20:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:17.921 04:20:30 -- common/autotest_common.sh@10 -- # set +x 00:17:17.921 ************************************ 00:17:17.921 END TEST nvmf_discovery 00:17:17.921 ************************************ 00:17:17.921 04:20:30 -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:17:17.921 04:20:30 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:17.921 04:20:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:17.921 04:20:30 -- common/autotest_common.sh@10 -- # set +x 00:17:18.182 ************************************ 00:17:18.182 START TEST nvmf_discovery_remove_ifc 00:17:18.182 ************************************ 00:17:18.182 04:20:30 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:17:18.182 * Looking for test storage... 
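The nvmf_discovery run that ends above exercised two error paths: starting a second discovery service under an already-used name (or against the same 8009 endpoint) fails with JSON-RPC error -17 "File exists", while pointing one at port 8010, where nothing listens, with a 3000 ms attach timeout fails with -110 "Connection timed out". Assuming rpc_cmd forwards to scripts/rpc.py, the equivalent direct invocations would be:

# Same -b name as the already-running discovery service -> error -17 "File exists"
scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
    -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
# Nothing listens on 8010, so the 3000 ms attach timeout expires -> error -110 "Connection timed out"
scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp \
    -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000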
00:17:18.182 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:18.182 04:20:30 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:18.182 04:20:30 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:18.182 04:20:30 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:18.182 04:20:30 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:18.182 04:20:30 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:18.182 04:20:30 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:18.182 04:20:30 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:18.182 04:20:30 -- scripts/common.sh@335 -- # IFS=.-: 00:17:18.182 04:20:30 -- scripts/common.sh@335 -- # read -ra ver1 00:17:18.182 04:20:30 -- scripts/common.sh@336 -- # IFS=.-: 00:17:18.182 04:20:30 -- scripts/common.sh@336 -- # read -ra ver2 00:17:18.182 04:20:30 -- scripts/common.sh@337 -- # local 'op=<' 00:17:18.182 04:20:30 -- scripts/common.sh@339 -- # ver1_l=2 00:17:18.182 04:20:30 -- scripts/common.sh@340 -- # ver2_l=1 00:17:18.182 04:20:30 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:18.182 04:20:30 -- scripts/common.sh@343 -- # case "$op" in 00:17:18.182 04:20:30 -- scripts/common.sh@344 -- # : 1 00:17:18.182 04:20:30 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:18.182 04:20:30 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:18.182 04:20:30 -- scripts/common.sh@364 -- # decimal 1 00:17:18.182 04:20:30 -- scripts/common.sh@352 -- # local d=1 00:17:18.182 04:20:30 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:18.182 04:20:30 -- scripts/common.sh@354 -- # echo 1 00:17:18.182 04:20:30 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:18.182 04:20:30 -- scripts/common.sh@365 -- # decimal 2 00:17:18.182 04:20:30 -- scripts/common.sh@352 -- # local d=2 00:17:18.182 04:20:30 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:18.182 04:20:30 -- scripts/common.sh@354 -- # echo 2 00:17:18.182 04:20:30 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:18.182 04:20:30 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:18.182 04:20:30 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:18.182 04:20:30 -- scripts/common.sh@367 -- # return 0 00:17:18.182 04:20:30 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:18.182 04:20:30 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:18.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:18.182 --rc genhtml_branch_coverage=1 00:17:18.182 --rc genhtml_function_coverage=1 00:17:18.182 --rc genhtml_legend=1 00:17:18.182 --rc geninfo_all_blocks=1 00:17:18.182 --rc geninfo_unexecuted_blocks=1 00:17:18.182 00:17:18.182 ' 00:17:18.182 04:20:30 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:18.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:18.182 --rc genhtml_branch_coverage=1 00:17:18.182 --rc genhtml_function_coverage=1 00:17:18.182 --rc genhtml_legend=1 00:17:18.182 --rc geninfo_all_blocks=1 00:17:18.182 --rc geninfo_unexecuted_blocks=1 00:17:18.182 00:17:18.182 ' 00:17:18.182 04:20:30 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:18.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:18.182 --rc genhtml_branch_coverage=1 00:17:18.182 --rc genhtml_function_coverage=1 00:17:18.182 --rc genhtml_legend=1 00:17:18.182 --rc geninfo_all_blocks=1 00:17:18.182 --rc geninfo_unexecuted_blocks=1 00:17:18.182 00:17:18.182 ' 00:17:18.182 
04:20:30 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:18.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:18.182 --rc genhtml_branch_coverage=1 00:17:18.182 --rc genhtml_function_coverage=1 00:17:18.182 --rc genhtml_legend=1 00:17:18.182 --rc geninfo_all_blocks=1 00:17:18.182 --rc geninfo_unexecuted_blocks=1 00:17:18.182 00:17:18.182 ' 00:17:18.182 04:20:30 -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:18.182 04:20:30 -- nvmf/common.sh@7 -- # uname -s 00:17:18.182 04:20:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:18.182 04:20:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:18.182 04:20:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:18.182 04:20:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:18.182 04:20:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:18.182 04:20:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:18.182 04:20:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:18.182 04:20:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:18.182 04:20:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:18.182 04:20:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:18.182 04:20:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cb4d3929-adbe-4142-b5d1-990bbf2d4fca 00:17:18.182 04:20:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=cb4d3929-adbe-4142-b5d1-990bbf2d4fca 00:17:18.182 04:20:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:18.182 04:20:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:18.182 04:20:30 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:18.182 04:20:30 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:18.182 04:20:30 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:18.182 04:20:30 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:18.182 04:20:30 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:18.182 04:20:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:18.182 04:20:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:18.182 04:20:30 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:18.182 04:20:30 -- paths/export.sh@5 -- # export PATH 00:17:18.182 04:20:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:18.182 04:20:30 -- nvmf/common.sh@46 -- # : 0 00:17:18.182 04:20:30 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:18.182 04:20:30 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:18.182 04:20:30 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:18.182 04:20:30 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:18.182 04:20:30 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:18.182 04:20:30 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:18.182 04:20:30 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:18.182 04:20:30 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:18.182 04:20:30 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:17:18.182 04:20:30 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:17:18.182 04:20:30 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:17:18.182 04:20:30 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:17:18.182 04:20:30 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:17:18.182 04:20:30 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:17:18.182 04:20:30 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:17:18.182 04:20:30 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:18.182 04:20:30 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:18.183 04:20:30 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:18.183 04:20:30 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:18.183 04:20:30 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:18.183 04:20:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:18.183 04:20:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:18.183 04:20:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:18.183 04:20:30 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:17:18.183 04:20:30 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:17:18.183 04:20:30 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:17:18.183 04:20:30 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:17:18.183 04:20:30 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:17:18.183 04:20:30 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:17:18.183 04:20:30 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:18.183 04:20:30 -- 
nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:18.183 04:20:30 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:18.183 04:20:30 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:17:18.183 04:20:30 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:18.183 04:20:30 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:18.183 04:20:30 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:18.183 04:20:30 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:18.183 04:20:30 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:18.183 04:20:30 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:18.183 04:20:30 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:18.183 04:20:30 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:18.183 04:20:30 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:17:18.183 04:20:30 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:17:18.183 Cannot find device "nvmf_tgt_br" 00:17:18.183 04:20:30 -- nvmf/common.sh@154 -- # true 00:17:18.183 04:20:30 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:17:18.183 Cannot find device "nvmf_tgt_br2" 00:17:18.183 04:20:30 -- nvmf/common.sh@155 -- # true 00:17:18.183 04:20:30 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:17:18.443 04:20:30 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:17:18.443 Cannot find device "nvmf_tgt_br" 00:17:18.443 04:20:30 -- nvmf/common.sh@157 -- # true 00:17:18.443 04:20:30 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:17:18.443 Cannot find device "nvmf_tgt_br2" 00:17:18.443 04:20:30 -- nvmf/common.sh@158 -- # true 00:17:18.443 04:20:30 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:17:18.443 04:20:30 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:17:18.443 04:20:30 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:18.443 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:18.443 04:20:30 -- nvmf/common.sh@161 -- # true 00:17:18.443 04:20:30 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:18.443 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:18.443 04:20:30 -- nvmf/common.sh@162 -- # true 00:17:18.443 04:20:30 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:17:18.443 04:20:30 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:18.443 04:20:30 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:18.443 04:20:30 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:18.443 04:20:30 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:18.443 04:20:30 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:18.443 04:20:30 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:18.443 04:20:30 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:18.443 04:20:30 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:18.443 04:20:30 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:17:18.443 04:20:30 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:17:18.443 04:20:30 -- 
nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:17:18.443 04:20:30 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:17:18.443 04:20:30 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:18.443 04:20:30 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:18.443 04:20:30 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:18.443 04:20:30 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:17:18.443 04:20:30 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:17:18.443 04:20:30 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:17:18.443 04:20:30 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:18.443 04:20:30 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:18.443 04:20:30 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:18.443 04:20:30 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:18.443 04:20:30 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:17:18.443 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:18.443 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:17:18.443 00:17:18.443 --- 10.0.0.2 ping statistics --- 00:17:18.443 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:18.443 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:17:18.443 04:20:30 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:17:18.443 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:18.443 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:17:18.443 00:17:18.443 --- 10.0.0.3 ping statistics --- 00:17:18.443 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:18.443 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:17:18.443 04:20:30 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:18.703 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:18.703 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:17:18.703 00:17:18.703 --- 10.0.0.1 ping statistics --- 00:17:18.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:18.703 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:17:18.703 04:20:31 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:18.703 04:20:31 -- nvmf/common.sh@421 -- # return 0 00:17:18.703 04:20:31 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:18.703 04:20:31 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:18.703 04:20:31 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:18.703 04:20:31 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:18.703 04:20:31 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:18.703 04:20:31 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:18.703 04:20:31 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:18.703 04:20:31 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:17:18.703 04:20:31 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:18.703 04:20:31 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:18.703 04:20:31 -- common/autotest_common.sh@10 -- # set +x 00:17:18.703 04:20:31 -- nvmf/common.sh@469 -- # nvmfpid=83552 00:17:18.703 04:20:31 -- nvmf/common.sh@470 -- # waitforlisten 83552 00:17:18.703 04:20:31 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:18.703 04:20:31 -- common/autotest_common.sh@829 -- # '[' -z 83552 ']' 00:17:18.703 04:20:31 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:18.703 04:20:31 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:18.703 04:20:31 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:18.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:18.703 04:20:31 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:18.703 04:20:31 -- common/autotest_common.sh@10 -- # set +x 00:17:18.703 [2024-12-06 04:20:31.089671] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:18.703 [2024-12-06 04:20:31.089783] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:18.703 [2024-12-06 04:20:31.230188] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:18.962 [2024-12-06 04:20:31.322048] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:18.962 [2024-12-06 04:20:31.322197] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:18.962 [2024-12-06 04:20:31.322209] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:18.962 [2024-12-06 04:20:31.322217] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
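Stripped of the xtrace noise, the nvmf_veth_init sequence above reduces to a small veth-plus-bridge topology with the target interfaces moved into a network namespace. A condensed sketch follows (one target interface shown; the trace also sets up nvmf_tgt_if2/10.0.0.3 and the iptables rules):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side stays in the root netns
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side moves into the namespace
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ping -c 1 10.0.0.2   # the successful pings above confirm this path before nvmf_tgt starts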
00:17:18.962 [2024-12-06 04:20:31.322247] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:19.529 04:20:32 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:19.529 04:20:32 -- common/autotest_common.sh@862 -- # return 0 00:17:19.529 04:20:32 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:19.529 04:20:32 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:19.529 04:20:32 -- common/autotest_common.sh@10 -- # set +x 00:17:19.788 04:20:32 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:19.788 04:20:32 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:17:19.788 04:20:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.788 04:20:32 -- common/autotest_common.sh@10 -- # set +x 00:17:19.788 [2024-12-06 04:20:32.112686] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:19.788 [2024-12-06 04:20:32.120859] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:17:19.788 null0 00:17:19.788 [2024-12-06 04:20:32.152729] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:19.788 04:20:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.788 04:20:32 -- host/discovery_remove_ifc.sh@59 -- # hostpid=83584 00:17:19.788 04:20:32 -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:17:19.788 04:20:32 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 83584 /tmp/host.sock 00:17:19.788 04:20:32 -- common/autotest_common.sh@829 -- # '[' -z 83584 ']' 00:17:19.788 04:20:32 -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:17:19.788 04:20:32 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:19.788 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:17:19.788 04:20:32 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:17:19.788 04:20:32 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:19.788 04:20:32 -- common/autotest_common.sh@10 -- # set +x 00:17:19.788 [2024-12-06 04:20:32.217782] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
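The NOTICE lines above (TCP transport init, listeners on 8009 and 4420, the null0 namespace) come from an RPC batch the test feeds to the target app; the batch itself is not echoed, so the following is only a plausible reconstruction from standard SPDK RPCs and the values visible in the trace (NVMF_TRANSPORT_OPTS='-t tcp -o', serial SPDKISFASTANDAWESOME, discovery NQN nqn.2014-08.org.nvmexpress.discovery):

# Reconstruction, not the literal heredoc from discovery_remove_ifc.sh; the
# null bdev sizes are illustrative.
scripts/rpc.py nvmf_create_transport -t tcp -o
scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
scripts/rpc.py bdev_null_create null0 1000 512
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDKISFASTANDAWESOME
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420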
00:17:19.788 [2024-12-06 04:20:32.217871] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83584 ] 00:17:20.047 [2024-12-06 04:20:32.352009] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:20.047 [2024-12-06 04:20:32.445436] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:20.047 [2024-12-06 04:20:32.445623] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:20.048 04:20:32 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:20.048 04:20:32 -- common/autotest_common.sh@862 -- # return 0 00:17:20.048 04:20:32 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:20.048 04:20:32 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:17:20.048 04:20:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.048 04:20:32 -- common/autotest_common.sh@10 -- # set +x 00:17:20.048 04:20:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.048 04:20:32 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:17:20.048 04:20:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.048 04:20:32 -- common/autotest_common.sh@10 -- # set +x 00:17:20.048 04:20:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.048 04:20:32 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:17:20.048 04:20:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.048 04:20:32 -- common/autotest_common.sh@10 -- # set +x 00:17:21.425 [2024-12-06 04:20:33.605845] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:17:21.425 [2024-12-06 04:20:33.605940] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:17:21.425 [2024-12-06 04:20:33.605992] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:17:21.425 [2024-12-06 04:20:33.611938] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:17:21.425 [2024-12-06 04:20:33.667985] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:17:21.425 [2024-12-06 04:20:33.668053] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:17:21.425 [2024-12-06 04:20:33.668080] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:17:21.425 [2024-12-06 04:20:33.668096] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:17:21.425 [2024-12-06 04:20:33.668120] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:17:21.425 04:20:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.425 04:20:33 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:17:21.425 04:20:33 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:21.425 04:20:33 -- 
host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:21.425 04:20:33 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:21.425 [2024-12-06 04:20:33.674373] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x249b2c0 was disconnected and freed. delete nvme_qpair. 00:17:21.425 04:20:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.425 04:20:33 -- common/autotest_common.sh@10 -- # set +x 00:17:21.425 04:20:33 -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:21.425 04:20:33 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:21.425 04:20:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.425 04:20:33 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:17:21.425 04:20:33 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:17:21.425 04:20:33 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:17:21.425 04:20:33 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:17:21.425 04:20:33 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:21.425 04:20:33 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:21.425 04:20:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.425 04:20:33 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:21.425 04:20:33 -- common/autotest_common.sh@10 -- # set +x 00:17:21.425 04:20:33 -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:21.425 04:20:33 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:21.425 04:20:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.425 04:20:33 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:21.425 04:20:33 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:22.359 04:20:34 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:22.359 04:20:34 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:22.359 04:20:34 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:22.359 04:20:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.359 04:20:34 -- common/autotest_common.sh@10 -- # set +x 00:17:22.359 04:20:34 -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:22.359 04:20:34 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:22.359 04:20:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.359 04:20:34 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:22.359 04:20:34 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:23.296 04:20:35 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:23.296 04:20:35 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:23.296 04:20:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.296 04:20:35 -- common/autotest_common.sh@10 -- # set +x 00:17:23.296 04:20:35 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:23.296 04:20:35 -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:23.296 04:20:35 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:23.554 04:20:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.554 04:20:35 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:23.554 04:20:35 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:24.489 04:20:36 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:24.489 04:20:36 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 
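The interface removal above (deleting 10.0.0.2/24 and downing nvmf_tgt_if inside the namespace) is what the subsequent once-per-second get_bdev_list polling is waiting on. A sketch of what that wait reduces to, reusing the get_bdev_list helper inferred earlier (the real wait_for_bdev likely also bounds the number of retries):

wait_for_bdev() {
    local expected=$1
    # Poll the host socket until the reported bdev list matches what we expect.
    while [[ "$(get_bdev_list)" != "$expected" ]]; do
        sleep 1
    done
}

ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down
wait_for_bdev ''   # returns once nvme0n1 is torn down by the ctrlr-loss handling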
00:17:24.490 04:20:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.490 04:20:36 -- common/autotest_common.sh@10 -- # set +x 00:17:24.490 04:20:36 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:24.490 04:20:36 -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:24.490 04:20:36 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:24.490 04:20:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.490 04:20:36 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:24.490 04:20:36 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:25.424 04:20:37 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:25.424 04:20:37 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:25.424 04:20:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.424 04:20:37 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:25.424 04:20:37 -- common/autotest_common.sh@10 -- # set +x 00:17:25.424 04:20:37 -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:25.424 04:20:37 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:25.425 04:20:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.693 04:20:38 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:25.693 04:20:38 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:26.655 04:20:39 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:26.655 04:20:39 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:26.655 04:20:39 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:26.655 04:20:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.655 04:20:39 -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:26.655 04:20:39 -- common/autotest_common.sh@10 -- # set +x 00:17:26.655 04:20:39 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:26.655 04:20:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.655 04:20:39 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:26.655 04:20:39 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:26.655 [2024-12-06 04:20:39.095818] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:17:26.655 [2024-12-06 04:20:39.095893] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:26.655 [2024-12-06 04:20:39.095910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.655 [2024-12-06 04:20:39.095922] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:26.655 [2024-12-06 04:20:39.095931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.655 [2024-12-06 04:20:39.095940] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:26.655 [2024-12-06 04:20:39.095949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.655 [2024-12-06 04:20:39.095957] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:26.655 [2024-12-06 04:20:39.095966] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.655 [2024-12-06 04:20:39.095975] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:17:26.655 [2024-12-06 04:20:39.095983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.655 [2024-12-06 04:20:39.095991] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245f6c0 is same with the state(5) to be set 00:17:26.655 [2024-12-06 04:20:39.105805] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x245f6c0 (9): Bad file descriptor 00:17:26.655 [2024-12-06 04:20:39.115838] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:27.594 04:20:40 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:27.594 04:20:40 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:27.594 04:20:40 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:27.594 04:20:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.594 04:20:40 -- common/autotest_common.sh@10 -- # set +x 00:17:27.594 04:20:40 -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:27.594 04:20:40 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:27.854 [2024-12-06 04:20:40.171474] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:17:28.793 [2024-12-06 04:20:41.196503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:17:29.732 [2024-12-06 04:20:42.219527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:17:29.732 [2024-12-06 04:20:42.219687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x245f6c0 with addr=10.0.0.2, port=4420 00:17:29.732 [2024-12-06 04:20:42.219726] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245f6c0 is same with the state(5) to be set 00:17:29.732 [2024-12-06 04:20:42.219782] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:17:29.732 [2024-12-06 04:20:42.219806] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:17:29.732 [2024-12-06 04:20:42.219825] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:17:29.732 [2024-12-06 04:20:42.219847] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:17:29.732 [2024-12-06 04:20:42.220675] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x245f6c0 (9): Bad file descriptor 00:17:29.732 [2024-12-06 04:20:42.220738] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:17:29.732 [2024-12-06 04:20:42.220788] bdev_nvme.c:6510:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:17:29.732 [2024-12-06 04:20:42.220855] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:29.732 [2024-12-06 04:20:42.220885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.732 [2024-12-06 04:20:42.220921] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:29.732 [2024-12-06 04:20:42.220942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.732 [2024-12-06 04:20:42.220963] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:29.732 [2024-12-06 04:20:42.220984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.732 [2024-12-06 04:20:42.221006] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:29.732 [2024-12-06 04:20:42.221026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.732 [2024-12-06 04:20:42.221058] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:17:29.732 [2024-12-06 04:20:42.221078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.732 [2024-12-06 04:20:42.221098] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
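The connect() errno 110 (ETIMEDOUT) and reset-failure burst above is the expected effect of the test taking the target's data interface away. A condensed sketch of that interface flap, assembled only from the ip commands already visible in this trace (discovery_remove_ifc.sh @75/@76 for the teardown, @82/@83 further down for the restore):

```bash
# Condensed sketch of the interface flap exercised by discovery_remove_ifc.sh,
# using only the names and addresses that appear in this trace.
NETNS=nvmf_tgt_ns_spdk
IFACE=nvmf_tgt_if

# @75/@76: drop the target address and take the link down; this is what
# produces the connect() errno 110 / reconnect-failure burst logged above.
ip netns exec "$NETNS" ip addr del 10.0.0.2/24 dev "$IFACE"
ip netns exec "$NETNS" ip link set "$IFACE" down

# @82/@83 (later in the trace): restore the address and bring the link back up
# so the discovery service can re-attach the subsystem as nvme1n1.
ip netns exec "$NETNS" ip addr add 10.0.0.2/24 dev "$IFACE"
ip netns exec "$NETNS" ip link set "$IFACE" up
```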
00:17:29.732 [2024-12-06 04:20:42.221158] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x245fad0 (9): Bad file descriptor 00:17:29.732 [2024-12-06 04:20:42.222160] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:17:29.732 [2024-12-06 04:20:42.222208] nvme_ctrlr.c:1136:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:17:29.732 04:20:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.733 04:20:42 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:29.733 04:20:42 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:31.112 04:20:43 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:31.112 04:20:43 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:31.112 04:20:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.112 04:20:43 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:31.112 04:20:43 -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:31.112 04:20:43 -- common/autotest_common.sh@10 -- # set +x 00:17:31.112 04:20:43 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:31.112 04:20:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.112 04:20:43 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:17:31.112 04:20:43 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:31.112 04:20:43 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:31.112 04:20:43 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:17:31.112 04:20:43 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:31.112 04:20:43 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:31.112 04:20:43 -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:31.112 04:20:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.112 04:20:43 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:31.112 04:20:43 -- common/autotest_common.sh@10 -- # set +x 00:17:31.112 04:20:43 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:31.112 04:20:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.112 04:20:43 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:17:31.112 04:20:43 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:31.680 [2024-12-06 04:20:44.230921] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:17:31.680 [2024-12-06 04:20:44.230956] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:17:31.680 [2024-12-06 04:20:44.230991] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:17:31.680 [2024-12-06 04:20:44.236955] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:17:31.940 [2024-12-06 04:20:44.292308] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:17:31.940 [2024-12-06 04:20:44.292391] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:17:31.940 [2024-12-06 04:20:44.292426] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:17:31.940 [2024-12-06 04:20:44.292443] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] 
attach nvme1 done 00:17:31.940 [2024-12-06 04:20:44.292453] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:17:31.940 [2024-12-06 04:20:44.299665] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x246c930 was disconnected and freed. delete nvme_qpair. 00:17:31.940 04:20:44 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:31.940 04:20:44 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:31.940 04:20:44 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:31.940 04:20:44 -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:31.940 04:20:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.940 04:20:44 -- common/autotest_common.sh@10 -- # set +x 00:17:31.940 04:20:44 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:31.940 04:20:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.940 04:20:44 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:17:31.940 04:20:44 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:17:31.940 04:20:44 -- host/discovery_remove_ifc.sh@90 -- # killprocess 83584 00:17:31.940 04:20:44 -- common/autotest_common.sh@936 -- # '[' -z 83584 ']' 00:17:31.940 04:20:44 -- common/autotest_common.sh@940 -- # kill -0 83584 00:17:31.940 04:20:44 -- common/autotest_common.sh@941 -- # uname 00:17:31.940 04:20:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:31.940 04:20:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83584 00:17:31.940 04:20:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:31.940 killing process with pid 83584 00:17:31.940 04:20:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:31.940 04:20:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83584' 00:17:31.940 04:20:44 -- common/autotest_common.sh@955 -- # kill 83584 00:17:31.940 04:20:44 -- common/autotest_common.sh@960 -- # wait 83584 00:17:32.199 04:20:44 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:17:32.199 04:20:44 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:32.199 04:20:44 -- nvmf/common.sh@116 -- # sync 00:17:32.199 04:20:44 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:32.199 04:20:44 -- nvmf/common.sh@119 -- # set +e 00:17:32.199 04:20:44 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:32.199 04:20:44 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:32.199 rmmod nvme_tcp 00:17:32.459 rmmod nvme_fabrics 00:17:32.459 rmmod nvme_keyring 00:17:32.459 04:20:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:32.459 04:20:44 -- nvmf/common.sh@123 -- # set -e 00:17:32.459 04:20:44 -- nvmf/common.sh@124 -- # return 0 00:17:32.459 04:20:44 -- nvmf/common.sh@477 -- # '[' -n 83552 ']' 00:17:32.459 04:20:44 -- nvmf/common.sh@478 -- # killprocess 83552 00:17:32.459 04:20:44 -- common/autotest_common.sh@936 -- # '[' -z 83552 ']' 00:17:32.459 04:20:44 -- common/autotest_common.sh@940 -- # kill -0 83552 00:17:32.459 04:20:44 -- common/autotest_common.sh@941 -- # uname 00:17:32.459 04:20:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:32.459 04:20:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83552 00:17:32.459 killing process with pid 83552 00:17:32.459 04:20:44 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:32.459 04:20:44 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 
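Most of the repeated output above comes from one polling pair: get_bdev_list reads the current bdev names over the host-side RPC socket, and wait_for_bdev loops once per second until that list matches the expectation. A minimal, illustrative reconstruction based on the rpc_cmd/jq/sort/xargs calls shown in the trace, not a verbatim copy of host/discovery_remove_ifc.sh:

```bash
# Illustrative reconstruction of the polling pair behind the repeated output
# above. Assumption: rpc_cmd is shown here as a plain wrapper around
# scripts/rpc.py; the real harness wraps it with extra xtrace handling.
rpc_cmd() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py "$@"; }

get_bdev_list() {
    # Matches the traced pipeline: bdev_get_bdevs | jq names | sort | xargs
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

wait_for_bdev() {
    local expected=$1
    # Poll once per second until the bdev list equals the expectation:
    # '' right after the link drop, nvme1n1 once discovery re-attaches.
    while [[ "$(get_bdev_list)" != "$expected" ]]; do
        sleep 1
    done
}

wait_for_bdev ''         # bdev list drains after the interface goes down
wait_for_bdev nvme1n1    # re-attached namespace shows up after the restore
```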
00:17:32.459 04:20:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83552' 00:17:32.459 04:20:44 -- common/autotest_common.sh@955 -- # kill 83552 00:17:32.459 04:20:44 -- common/autotest_common.sh@960 -- # wait 83552 00:17:32.735 04:20:45 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:32.735 04:20:45 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:32.735 04:20:45 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:32.736 04:20:45 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:32.736 04:20:45 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:32.736 04:20:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:32.736 04:20:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:32.736 04:20:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:32.736 04:20:45 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:17:32.736 ************************************ 00:17:32.736 END TEST nvmf_discovery_remove_ifc 00:17:32.736 ************************************ 00:17:32.736 00:17:32.736 real 0m14.622s 00:17:32.736 user 0m23.086s 00:17:32.736 sys 0m2.344s 00:17:32.736 04:20:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:32.736 04:20:45 -- common/autotest_common.sh@10 -- # set +x 00:17:32.736 04:20:45 -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:17:32.736 04:20:45 -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:17:32.736 04:20:45 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:32.736 04:20:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:32.736 04:20:45 -- common/autotest_common.sh@10 -- # set +x 00:17:32.736 ************************************ 00:17:32.736 START TEST nvmf_digest 00:17:32.736 ************************************ 00:17:32.736 04:20:45 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:17:32.736 * Looking for test storage... 00:17:32.736 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:32.736 04:20:45 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:32.736 04:20:45 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:32.736 04:20:45 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:32.996 04:20:45 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:32.996 04:20:45 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:32.996 04:20:45 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:32.996 04:20:45 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:32.996 04:20:45 -- scripts/common.sh@335 -- # IFS=.-: 00:17:32.996 04:20:45 -- scripts/common.sh@335 -- # read -ra ver1 00:17:32.996 04:20:45 -- scripts/common.sh@336 -- # IFS=.-: 00:17:32.996 04:20:45 -- scripts/common.sh@336 -- # read -ra ver2 00:17:32.996 04:20:45 -- scripts/common.sh@337 -- # local 'op=<' 00:17:32.996 04:20:45 -- scripts/common.sh@339 -- # ver1_l=2 00:17:32.996 04:20:45 -- scripts/common.sh@340 -- # ver2_l=1 00:17:32.996 04:20:45 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:32.996 04:20:45 -- scripts/common.sh@343 -- # case "$op" in 00:17:32.996 04:20:45 -- scripts/common.sh@344 -- # : 1 00:17:32.996 04:20:45 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:32.996 04:20:45 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:32.997 04:20:45 -- scripts/common.sh@364 -- # decimal 1 00:17:32.997 04:20:45 -- scripts/common.sh@352 -- # local d=1 00:17:32.997 04:20:45 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:32.997 04:20:45 -- scripts/common.sh@354 -- # echo 1 00:17:32.997 04:20:45 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:32.997 04:20:45 -- scripts/common.sh@365 -- # decimal 2 00:17:32.997 04:20:45 -- scripts/common.sh@352 -- # local d=2 00:17:32.997 04:20:45 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:32.997 04:20:45 -- scripts/common.sh@354 -- # echo 2 00:17:32.997 04:20:45 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:32.997 04:20:45 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:32.997 04:20:45 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:32.997 04:20:45 -- scripts/common.sh@367 -- # return 0 00:17:32.997 04:20:45 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:32.997 04:20:45 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:32.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:32.997 --rc genhtml_branch_coverage=1 00:17:32.997 --rc genhtml_function_coverage=1 00:17:32.997 --rc genhtml_legend=1 00:17:32.997 --rc geninfo_all_blocks=1 00:17:32.997 --rc geninfo_unexecuted_blocks=1 00:17:32.997 00:17:32.997 ' 00:17:32.997 04:20:45 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:32.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:32.997 --rc genhtml_branch_coverage=1 00:17:32.997 --rc genhtml_function_coverage=1 00:17:32.997 --rc genhtml_legend=1 00:17:32.997 --rc geninfo_all_blocks=1 00:17:32.997 --rc geninfo_unexecuted_blocks=1 00:17:32.997 00:17:32.997 ' 00:17:32.997 04:20:45 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:32.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:32.997 --rc genhtml_branch_coverage=1 00:17:32.997 --rc genhtml_function_coverage=1 00:17:32.997 --rc genhtml_legend=1 00:17:32.997 --rc geninfo_all_blocks=1 00:17:32.997 --rc geninfo_unexecuted_blocks=1 00:17:32.997 00:17:32.997 ' 00:17:32.997 04:20:45 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:32.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:32.997 --rc genhtml_branch_coverage=1 00:17:32.997 --rc genhtml_function_coverage=1 00:17:32.997 --rc genhtml_legend=1 00:17:32.997 --rc geninfo_all_blocks=1 00:17:32.997 --rc geninfo_unexecuted_blocks=1 00:17:32.997 00:17:32.997 ' 00:17:32.997 04:20:45 -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:32.997 04:20:45 -- nvmf/common.sh@7 -- # uname -s 00:17:32.997 04:20:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:32.997 04:20:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:32.997 04:20:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:32.997 04:20:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:32.997 04:20:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:32.997 04:20:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:32.997 04:20:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:32.997 04:20:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:32.997 04:20:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:32.997 04:20:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:32.997 04:20:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cb4d3929-adbe-4142-b5d1-990bbf2d4fca 00:17:32.997 
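Earlier in this chunk, digest.sh probes lcov and compares dotted version strings with the lt/cmp_versions helpers (traced as "lt 1.15 2") before choosing LCOV_OPTS. A minimal, illustrative sketch of such a component-wise comparison, written fresh here rather than copied from scripts/common.sh:

```bash
# Minimal dotted-version comparison in the spirit of the lt/cmp_versions
# helpers traced above; split on '.', '-' and ':' and compare per component.
lt() {
    local -a a b
    local i
    IFS=.-: read -ra a <<< "$1"
    IFS=.-: read -ra b <<< "$2"
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        ((${a[i]:-0} < ${b[i]:-0})) && return 0
        ((${a[i]:-0} > ${b[i]:-0})) && return 1
    done
    return 1  # equal versions are not "less than"
}

lt 1.15 2 && echo "1.15 sorts before 2"
```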
04:20:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=cb4d3929-adbe-4142-b5d1-990bbf2d4fca 00:17:32.997 04:20:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:32.997 04:20:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:32.997 04:20:45 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:32.997 04:20:45 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:32.997 04:20:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:32.997 04:20:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:32.997 04:20:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:32.997 04:20:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.997 04:20:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.997 04:20:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.997 04:20:45 -- paths/export.sh@5 -- # export PATH 00:17:32.997 04:20:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.997 04:20:45 -- nvmf/common.sh@46 -- # : 0 00:17:32.997 04:20:45 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:32.997 04:20:45 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:32.997 04:20:45 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:32.997 04:20:45 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:32.997 04:20:45 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:32.997 04:20:45 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:17:32.997 04:20:45 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:32.997 04:20:45 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:32.997 04:20:45 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:17:32.997 04:20:45 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:17:32.997 04:20:45 -- host/digest.sh@16 -- # runtime=2 00:17:32.997 04:20:45 -- host/digest.sh@130 -- # [[ tcp != \t\c\p ]] 00:17:32.997 04:20:45 -- host/digest.sh@132 -- # nvmftestinit 00:17:32.997 04:20:45 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:32.997 04:20:45 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:32.997 04:20:45 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:32.997 04:20:45 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:32.997 04:20:45 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:32.997 04:20:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:32.997 04:20:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:32.997 04:20:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:32.997 04:20:45 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:17:32.997 04:20:45 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:17:32.997 04:20:45 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:17:32.997 04:20:45 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:17:32.997 04:20:45 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:17:32.997 04:20:45 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:17:32.997 04:20:45 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:32.997 04:20:45 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:32.997 04:20:45 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:32.997 04:20:45 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:17:32.997 04:20:45 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:32.997 04:20:45 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:32.997 04:20:45 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:32.997 04:20:45 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:32.997 04:20:45 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:32.997 04:20:45 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:32.997 04:20:45 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:32.997 04:20:45 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:32.997 04:20:45 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:17:32.997 04:20:45 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:17:32.997 Cannot find device "nvmf_tgt_br" 00:17:32.997 04:20:45 -- nvmf/common.sh@154 -- # true 00:17:32.997 04:20:45 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:17:32.997 Cannot find device "nvmf_tgt_br2" 00:17:32.997 04:20:45 -- nvmf/common.sh@155 -- # true 00:17:32.997 04:20:45 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:17:32.997 04:20:45 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:17:32.997 Cannot find device "nvmf_tgt_br" 00:17:32.997 04:20:45 -- nvmf/common.sh@157 -- # true 00:17:32.997 04:20:45 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:17:32.997 Cannot find device "nvmf_tgt_br2" 00:17:32.997 04:20:45 -- nvmf/common.sh@158 -- # true 00:17:32.997 04:20:45 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:17:32.997 04:20:45 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:17:32.997 
04:20:45 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:32.997 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:32.997 04:20:45 -- nvmf/common.sh@161 -- # true 00:17:32.997 04:20:45 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:32.997 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:32.997 04:20:45 -- nvmf/common.sh@162 -- # true 00:17:32.997 04:20:45 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:17:32.997 04:20:45 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:32.997 04:20:45 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:32.997 04:20:45 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:33.256 04:20:45 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:33.256 04:20:45 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:33.256 04:20:45 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:33.256 04:20:45 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:33.256 04:20:45 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:33.256 04:20:45 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:17:33.256 04:20:45 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:17:33.256 04:20:45 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:17:33.256 04:20:45 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:17:33.256 04:20:45 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:33.256 04:20:45 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:33.256 04:20:45 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:33.256 04:20:45 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:17:33.256 04:20:45 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:17:33.256 04:20:45 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:17:33.256 04:20:45 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:33.256 04:20:45 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:33.256 04:20:45 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:33.256 04:20:45 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:33.256 04:20:45 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:17:33.256 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:33.256 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:17:33.256 00:17:33.256 --- 10.0.0.2 ping statistics --- 00:17:33.256 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:33.256 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:17:33.256 04:20:45 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:17:33.256 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:17:33.256 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:17:33.256 00:17:33.256 --- 10.0.0.3 ping statistics --- 00:17:33.256 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:33.256 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:17:33.256 04:20:45 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:33.256 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:33.256 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:17:33.256 00:17:33.256 --- 10.0.0.1 ping statistics --- 00:17:33.256 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:33.256 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:17:33.256 04:20:45 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:33.256 04:20:45 -- nvmf/common.sh@421 -- # return 0 00:17:33.256 04:20:45 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:33.256 04:20:45 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:33.256 04:20:45 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:33.256 04:20:45 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:33.256 04:20:45 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:33.256 04:20:45 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:33.256 04:20:45 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:33.256 04:20:45 -- host/digest.sh@134 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:33.256 04:20:45 -- host/digest.sh@135 -- # run_test nvmf_digest_clean run_digest 00:17:33.256 04:20:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:33.256 04:20:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:33.256 04:20:45 -- common/autotest_common.sh@10 -- # set +x 00:17:33.256 ************************************ 00:17:33.256 START TEST nvmf_digest_clean 00:17:33.256 ************************************ 00:17:33.256 04:20:45 -- common/autotest_common.sh@1114 -- # run_digest 00:17:33.256 04:20:45 -- host/digest.sh@119 -- # nvmfappstart --wait-for-rpc 00:17:33.256 04:20:45 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:33.256 04:20:45 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:33.256 04:20:45 -- common/autotest_common.sh@10 -- # set +x 00:17:33.256 04:20:45 -- nvmf/common.sh@469 -- # nvmfpid=83996 00:17:33.256 04:20:45 -- nvmf/common.sh@470 -- # waitforlisten 83996 00:17:33.256 04:20:45 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:17:33.257 04:20:45 -- common/autotest_common.sh@829 -- # '[' -z 83996 ']' 00:17:33.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:33.257 04:20:45 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:33.257 04:20:45 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:33.257 04:20:45 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:33.257 04:20:45 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:33.257 04:20:45 -- common/autotest_common.sh@10 -- # set +x 00:17:33.515 [2024-12-06 04:20:45.854324] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
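The ip/iptables calls above make up the virtual-ethernet fixture that the rest of the digest test runs against: one initiator-side veth, two target-side veths inside the nvmf_tgt_ns_spdk namespace, everything bridged, TCP port 4420 opened, and connectivity checked with pings. Collected into one condensed sketch, using exactly the names and addresses shown in the trace:

```bash
# Condensed sketch of the nvmf veth/netns fixture traced above (nvmf/common.sh).
ip netns add nvmf_tgt_ns_spdk

# veth pairs: host-side ends stay on the host, target ends move into the netns.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# Addressing: 10.0.0.1 initiator, 10.0.0.2/10.0.0.3 target listeners.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# Bring everything up, inside and outside the namespace.
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# Bridge the host-side ends together and allow NVMe/TCP traffic through.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Connectivity checks, as pinged in the trace above.
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
```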
00:17:33.515 [2024-12-06 04:20:45.854455] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:33.515 [2024-12-06 04:20:45.998047] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:33.774 [2024-12-06 04:20:46.081501] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:33.774 [2024-12-06 04:20:46.081938] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:33.774 [2024-12-06 04:20:46.081966] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:33.774 [2024-12-06 04:20:46.081980] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:33.774 [2024-12-06 04:20:46.082011] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:34.342 04:20:46 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:34.342 04:20:46 -- common/autotest_common.sh@862 -- # return 0 00:17:34.342 04:20:46 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:34.342 04:20:46 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:34.342 04:20:46 -- common/autotest_common.sh@10 -- # set +x 00:17:34.602 04:20:46 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:34.602 04:20:46 -- host/digest.sh@120 -- # common_target_config 00:17:34.602 04:20:46 -- host/digest.sh@43 -- # rpc_cmd 00:17:34.602 04:20:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.602 04:20:46 -- common/autotest_common.sh@10 -- # set +x 00:17:34.602 null0 00:17:34.602 [2024-12-06 04:20:47.027526] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:34.602 [2024-12-06 04:20:47.051653] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:34.602 04:20:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.602 04:20:47 -- host/digest.sh@122 -- # run_bperf randread 4096 128 00:17:34.602 04:20:47 -- host/digest.sh@77 -- # local rw bs qd 00:17:34.602 04:20:47 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:34.602 04:20:47 -- host/digest.sh@80 -- # rw=randread 00:17:34.602 04:20:47 -- host/digest.sh@80 -- # bs=4096 00:17:34.602 04:20:47 -- host/digest.sh@80 -- # qd=128 00:17:34.602 04:20:47 -- host/digest.sh@82 -- # bperfpid=84028 00:17:34.602 04:20:47 -- host/digest.sh@83 -- # waitforlisten 84028 /var/tmp/bperf.sock 00:17:34.602 04:20:47 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:17:34.602 04:20:47 -- common/autotest_common.sh@829 -- # '[' -z 84028 ']' 00:17:34.602 04:20:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:34.602 04:20:47 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:34.602 04:20:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:34.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
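Each of the four bperf runs in this part of the digest test (randread and randwrite, at 4096-byte/qd=128 and 131072-byte/qd=16, the first of which is launching just above) follows the same sequence. A condensed, illustrative sketch of one iteration, put together only from the commands visible in this trace; the rw/bs/qd variables stand in for the per-run parameters:

```bash
# One run_bperf iteration; rw/bs/qd are the parameters that vary per run.
SPDK=/home/vagrant/spdk_repo/spdk
rw=randread bs=4096 qd=128

# Start bdevperf idle (-z) and paused (--wait-for-rpc) on its own RPC socket,
# then wait for that socket to appear (the harness uses waitforlisten here).
"$SPDK/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
    -w "$rw" -o "$bs" -t 2 -q "$qd" -z --wait-for-rpc &
while [[ ! -S /var/tmp/bperf.sock ]]; do sleep 0.1; done

# Finish framework init, attach the target with data digest enabled (--ddgst),
# and drive the 2-second workload.
"$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock framework_start_init
"$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests

# Confirm crc32c digests were actually computed; the checks later in the trace
# expect the "software" accel module since no offload engine is configured.
"$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock accel_get_stats \
    | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
```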
00:17:34.602 04:20:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:34.602 04:20:47 -- common/autotest_common.sh@10 -- # set +x 00:17:34.602 [2024-12-06 04:20:47.111299] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:34.602 [2024-12-06 04:20:47.111808] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84028 ] 00:17:34.861 [2024-12-06 04:20:47.255088] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:34.861 [2024-12-06 04:20:47.351866] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:35.821 04:20:48 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:35.821 04:20:48 -- common/autotest_common.sh@862 -- # return 0 00:17:35.821 04:20:48 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:17:35.821 04:20:48 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:17:35.821 04:20:48 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:36.080 04:20:48 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:36.080 04:20:48 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:36.340 nvme0n1 00:17:36.340 04:20:48 -- host/digest.sh@91 -- # bperf_py perform_tests 00:17:36.340 04:20:48 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:36.340 Running I/O for 2 seconds... 
00:17:38.877 00:17:38.877 Latency(us) 00:17:38.877 [2024-12-06T04:20:51.442Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:38.877 [2024-12-06T04:20:51.442Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:17:38.877 nvme0n1 : 2.01 16679.87 65.16 0.00 0.00 7668.97 3038.49 17635.14 00:17:38.877 [2024-12-06T04:20:51.442Z] =================================================================================================================== 00:17:38.877 [2024-12-06T04:20:51.442Z] Total : 16679.87 65.16 0.00 0.00 7668.97 3038.49 17635.14 00:17:38.877 0 00:17:38.877 04:20:50 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:17:38.877 04:20:50 -- host/digest.sh@92 -- # get_accel_stats 00:17:38.877 04:20:50 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:38.877 04:20:50 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:38.877 04:20:50 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:38.877 | select(.opcode=="crc32c") 00:17:38.877 | "\(.module_name) \(.executed)"' 00:17:38.877 04:20:51 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:17:38.877 04:20:51 -- host/digest.sh@93 -- # exp_module=software 00:17:38.877 04:20:51 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:17:38.877 04:20:51 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:38.877 04:20:51 -- host/digest.sh@97 -- # killprocess 84028 00:17:38.877 04:20:51 -- common/autotest_common.sh@936 -- # '[' -z 84028 ']' 00:17:38.877 04:20:51 -- common/autotest_common.sh@940 -- # kill -0 84028 00:17:38.877 04:20:51 -- common/autotest_common.sh@941 -- # uname 00:17:38.877 04:20:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:38.877 04:20:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84028 00:17:38.877 killing process with pid 84028 00:17:38.877 Received shutdown signal, test time was about 2.000000 seconds 00:17:38.877 00:17:38.877 Latency(us) 00:17:38.877 [2024-12-06T04:20:51.442Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:38.877 [2024-12-06T04:20:51.442Z] =================================================================================================================== 00:17:38.877 [2024-12-06T04:20:51.442Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:38.877 04:20:51 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:38.877 04:20:51 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:38.877 04:20:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84028' 00:17:38.877 04:20:51 -- common/autotest_common.sh@955 -- # kill 84028 00:17:38.878 04:20:51 -- common/autotest_common.sh@960 -- # wait 84028 00:17:38.878 04:20:51 -- host/digest.sh@123 -- # run_bperf randread 131072 16 00:17:38.878 04:20:51 -- host/digest.sh@77 -- # local rw bs qd 00:17:38.878 04:20:51 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:38.878 04:20:51 -- host/digest.sh@80 -- # rw=randread 00:17:38.878 04:20:51 -- host/digest.sh@80 -- # bs=131072 00:17:38.878 04:20:51 -- host/digest.sh@80 -- # qd=16 00:17:38.878 04:20:51 -- host/digest.sh@82 -- # bperfpid=84090 00:17:38.878 04:20:51 -- host/digest.sh@83 -- # waitforlisten 84090 /var/tmp/bperf.sock 00:17:38.878 04:20:51 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:17:38.878 04:20:51 -- 
common/autotest_common.sh@829 -- # '[' -z 84090 ']' 00:17:38.878 04:20:51 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:38.878 04:20:51 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:38.878 04:20:51 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:38.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:38.878 04:20:51 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:38.878 04:20:51 -- common/autotest_common.sh@10 -- # set +x 00:17:38.878 [2024-12-06 04:20:51.430367] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:38.878 [2024-12-06 04:20:51.430767] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefixI/O size of 131072 is greater than zero copy threshold (65536). 00:17:38.878 Zero copy mechanism will not be used. 00:17:38.878 =spdk_pid84090 ] 00:17:39.137 [2024-12-06 04:20:51.570078] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:39.137 [2024-12-06 04:20:51.654067] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:39.137 04:20:51 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:39.137 04:20:51 -- common/autotest_common.sh@862 -- # return 0 00:17:39.137 04:20:51 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:17:39.137 04:20:51 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:17:39.137 04:20:51 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:39.706 04:20:52 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:39.706 04:20:52 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:39.966 nvme0n1 00:17:39.966 04:20:52 -- host/digest.sh@91 -- # bperf_py perform_tests 00:17:39.966 04:20:52 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:39.966 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:39.966 Zero copy mechanism will not be used. 00:17:39.966 Running I/O for 2 seconds... 
00:17:42.503 00:17:42.503 Latency(us) 00:17:42.503 [2024-12-06T04:20:55.068Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:42.503 [2024-12-06T04:20:55.068Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:17:42.503 nvme0n1 : 2.00 7717.04 964.63 0.00 0.00 2070.32 1779.90 3023.59 00:17:42.503 [2024-12-06T04:20:55.068Z] =================================================================================================================== 00:17:42.503 [2024-12-06T04:20:55.068Z] Total : 7717.04 964.63 0.00 0.00 2070.32 1779.90 3023.59 00:17:42.503 0 00:17:42.503 04:20:54 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:17:42.503 04:20:54 -- host/digest.sh@92 -- # get_accel_stats 00:17:42.503 04:20:54 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:42.503 04:20:54 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:42.503 | select(.opcode=="crc32c") 00:17:42.503 | "\(.module_name) \(.executed)"' 00:17:42.503 04:20:54 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:42.503 04:20:54 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:17:42.503 04:20:54 -- host/digest.sh@93 -- # exp_module=software 00:17:42.503 04:20:54 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:17:42.503 04:20:54 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:42.503 04:20:54 -- host/digest.sh@97 -- # killprocess 84090 00:17:42.503 04:20:54 -- common/autotest_common.sh@936 -- # '[' -z 84090 ']' 00:17:42.503 04:20:54 -- common/autotest_common.sh@940 -- # kill -0 84090 00:17:42.503 04:20:54 -- common/autotest_common.sh@941 -- # uname 00:17:42.503 04:20:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:42.503 04:20:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84090 00:17:42.503 killing process with pid 84090 00:17:42.503 Received shutdown signal, test time was about 2.000000 seconds 00:17:42.503 00:17:42.503 Latency(us) 00:17:42.503 [2024-12-06T04:20:55.068Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:42.503 [2024-12-06T04:20:55.068Z] =================================================================================================================== 00:17:42.503 [2024-12-06T04:20:55.068Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:42.503 04:20:54 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:42.503 04:20:54 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:42.503 04:20:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84090' 00:17:42.503 04:20:54 -- common/autotest_common.sh@955 -- # kill 84090 00:17:42.503 04:20:54 -- common/autotest_common.sh@960 -- # wait 84090 00:17:42.503 04:20:55 -- host/digest.sh@124 -- # run_bperf randwrite 4096 128 00:17:42.503 04:20:55 -- host/digest.sh@77 -- # local rw bs qd 00:17:42.503 04:20:55 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:42.503 04:20:55 -- host/digest.sh@80 -- # rw=randwrite 00:17:42.503 04:20:55 -- host/digest.sh@80 -- # bs=4096 00:17:42.503 04:20:55 -- host/digest.sh@80 -- # qd=128 00:17:42.503 04:20:55 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:17:42.503 04:20:55 -- host/digest.sh@82 -- # bperfpid=84141 00:17:42.503 04:20:55 -- host/digest.sh@83 -- # waitforlisten 84141 /var/tmp/bperf.sock 00:17:42.503 04:20:55 -- 
common/autotest_common.sh@829 -- # '[' -z 84141 ']' 00:17:42.503 04:20:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:42.503 04:20:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:42.503 04:20:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:42.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:42.503 04:20:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:42.503 04:20:55 -- common/autotest_common.sh@10 -- # set +x 00:17:42.503 [2024-12-06 04:20:55.053892] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:42.503 [2024-12-06 04:20:55.053998] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84141 ] 00:17:42.761 [2024-12-06 04:20:55.190682] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:42.761 [2024-12-06 04:20:55.276541] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:43.695 04:20:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:43.695 04:20:56 -- common/autotest_common.sh@862 -- # return 0 00:17:43.695 04:20:56 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:17:43.695 04:20:56 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:17:43.695 04:20:56 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:43.954 04:20:56 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:43.954 04:20:56 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:44.213 nvme0n1 00:17:44.213 04:20:56 -- host/digest.sh@91 -- # bperf_py perform_tests 00:17:44.213 04:20:56 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:44.213 Running I/O for 2 seconds... 
00:17:46.750 00:17:46.750 Latency(us) 00:17:46.750 [2024-12-06T04:20:59.315Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:46.750 [2024-12-06T04:20:59.315Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:46.750 nvme0n1 : 2.00 16993.98 66.38 0.00 0.00 7525.58 6821.70 14954.12 00:17:46.750 [2024-12-06T04:20:59.315Z] =================================================================================================================== 00:17:46.750 [2024-12-06T04:20:59.315Z] Total : 16993.98 66.38 0.00 0.00 7525.58 6821.70 14954.12 00:17:46.750 0 00:17:46.750 04:20:58 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:17:46.750 04:20:58 -- host/digest.sh@92 -- # get_accel_stats 00:17:46.750 04:20:58 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:46.750 04:20:58 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:46.750 04:20:58 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:46.750 | select(.opcode=="crc32c") 00:17:46.750 | "\(.module_name) \(.executed)"' 00:17:46.750 04:20:58 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:17:46.750 04:20:58 -- host/digest.sh@93 -- # exp_module=software 00:17:46.750 04:20:58 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:17:46.750 04:20:58 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:46.750 04:20:58 -- host/digest.sh@97 -- # killprocess 84141 00:17:46.750 04:20:58 -- common/autotest_common.sh@936 -- # '[' -z 84141 ']' 00:17:46.750 04:20:58 -- common/autotest_common.sh@940 -- # kill -0 84141 00:17:46.750 04:20:58 -- common/autotest_common.sh@941 -- # uname 00:17:46.750 04:20:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:46.750 04:20:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84141 00:17:46.750 killing process with pid 84141 00:17:46.750 Received shutdown signal, test time was about 2.000000 seconds 00:17:46.750 00:17:46.750 Latency(us) 00:17:46.750 [2024-12-06T04:20:59.315Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:46.750 [2024-12-06T04:20:59.315Z] =================================================================================================================== 00:17:46.750 [2024-12-06T04:20:59.315Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:46.750 04:20:59 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:46.750 04:20:59 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:46.750 04:20:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84141' 00:17:46.750 04:20:59 -- common/autotest_common.sh@955 -- # kill 84141 00:17:46.750 04:20:59 -- common/autotest_common.sh@960 -- # wait 84141 00:17:46.750 04:20:59 -- host/digest.sh@125 -- # run_bperf randwrite 131072 16 00:17:46.750 04:20:59 -- host/digest.sh@77 -- # local rw bs qd 00:17:46.750 04:20:59 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:46.750 04:20:59 -- host/digest.sh@80 -- # rw=randwrite 00:17:46.750 04:20:59 -- host/digest.sh@80 -- # bs=131072 00:17:46.750 04:20:59 -- host/digest.sh@80 -- # qd=16 00:17:46.750 04:20:59 -- host/digest.sh@82 -- # bperfpid=84198 00:17:46.750 04:20:59 -- host/digest.sh@83 -- # waitforlisten 84198 /var/tmp/bperf.sock 00:17:46.750 04:20:59 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:17:46.750 04:20:59 -- 
common/autotest_common.sh@829 -- # '[' -z 84198 ']' 00:17:46.750 04:20:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:46.750 04:20:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:46.750 04:20:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:46.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:46.750 04:20:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:46.750 04:20:59 -- common/autotest_common.sh@10 -- # set +x 00:17:46.750 [2024-12-06 04:20:59.284535] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:46.750 [2024-12-06 04:20:59.284872] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84198 ] 00:17:46.750 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:46.750 Zero copy mechanism will not be used. 00:17:47.009 [2024-12-06 04:20:59.424090] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:47.009 [2024-12-06 04:20:59.518673] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:47.945 04:21:00 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:47.945 04:21:00 -- common/autotest_common.sh@862 -- # return 0 00:17:47.945 04:21:00 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:17:47.945 04:21:00 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:17:47.945 04:21:00 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:48.204 04:21:00 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:48.204 04:21:00 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:48.463 nvme0n1 00:17:48.463 04:21:00 -- host/digest.sh@91 -- # bperf_py perform_tests 00:17:48.463 04:21:00 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:48.463 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:48.463 Zero copy mechanism will not be used. 00:17:48.463 Running I/O for 2 seconds... 
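Each run finishes with the accel-stats check seen after the first run above: the script asks bdevperf's accel framework how many crc32c operations were executed and by which module. A sketch of that check, reusing the jq filter recorded in the log:

# Pull the crc32c counters out of accel_get_stats: prints "<module_name> <executed>"
read -r acc_module acc_executed < <(
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
    jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
)
# Clean-digest runs expect the software module to have executed at least one crc32c op
(( acc_executed > 0 )) && [[ $acc_module == software ]] && echo "ok: $acc_module executed $acc_executed crc32c ops"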
00:17:50.996 00:17:50.996 Latency(us) 00:17:50.996 [2024-12-06T04:21:03.561Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:50.996 [2024-12-06T04:21:03.561Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:17:50.996 nvme0n1 : 2.00 6658.10 832.26 0.00 0.00 2397.79 1846.92 6970.65 00:17:50.996 [2024-12-06T04:21:03.561Z] =================================================================================================================== 00:17:50.996 [2024-12-06T04:21:03.561Z] Total : 6658.10 832.26 0.00 0.00 2397.79 1846.92 6970.65 00:17:50.996 0 00:17:50.996 04:21:03 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:17:50.996 04:21:03 -- host/digest.sh@92 -- # get_accel_stats 00:17:50.996 04:21:03 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:50.996 04:21:03 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:50.996 04:21:03 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:50.996 | select(.opcode=="crc32c") 00:17:50.996 | "\(.module_name) \(.executed)"' 00:17:50.996 04:21:03 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:17:50.996 04:21:03 -- host/digest.sh@93 -- # exp_module=software 00:17:50.996 04:21:03 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:17:50.996 04:21:03 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:50.996 04:21:03 -- host/digest.sh@97 -- # killprocess 84198 00:17:50.996 04:21:03 -- common/autotest_common.sh@936 -- # '[' -z 84198 ']' 00:17:50.996 04:21:03 -- common/autotest_common.sh@940 -- # kill -0 84198 00:17:50.996 04:21:03 -- common/autotest_common.sh@941 -- # uname 00:17:50.996 04:21:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:50.996 04:21:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84198 00:17:50.996 killing process with pid 84198 00:17:50.996 Received shutdown signal, test time was about 2.000000 seconds 00:17:50.996 00:17:50.996 Latency(us) 00:17:50.996 [2024-12-06T04:21:03.561Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:50.996 [2024-12-06T04:21:03.561Z] =================================================================================================================== 00:17:50.996 [2024-12-06T04:21:03.561Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:50.996 04:21:03 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:50.996 04:21:03 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:50.996 04:21:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84198' 00:17:50.996 04:21:03 -- common/autotest_common.sh@955 -- # kill 84198 00:17:50.996 04:21:03 -- common/autotest_common.sh@960 -- # wait 84198 00:17:50.996 04:21:03 -- host/digest.sh@126 -- # killprocess 83996 00:17:50.996 04:21:03 -- common/autotest_common.sh@936 -- # '[' -z 83996 ']' 00:17:50.996 04:21:03 -- common/autotest_common.sh@940 -- # kill -0 83996 00:17:50.996 04:21:03 -- common/autotest_common.sh@941 -- # uname 00:17:50.996 04:21:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:50.996 04:21:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83996 00:17:51.255 killing process with pid 83996 00:17:51.255 04:21:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:51.255 04:21:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:51.255 04:21:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83996' 00:17:51.255 
04:21:03 -- common/autotest_common.sh@955 -- # kill 83996 00:17:51.255 04:21:03 -- common/autotest_common.sh@960 -- # wait 83996 00:17:51.255 ************************************ 00:17:51.255 END TEST nvmf_digest_clean 00:17:51.255 ************************************ 00:17:51.255 00:17:51.255 real 0m17.975s 00:17:51.255 user 0m34.635s 00:17:51.255 sys 0m4.653s 00:17:51.255 04:21:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:51.255 04:21:03 -- common/autotest_common.sh@10 -- # set +x 00:17:51.255 04:21:03 -- host/digest.sh@136 -- # run_test nvmf_digest_error run_digest_error 00:17:51.255 04:21:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:51.255 04:21:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:51.255 04:21:03 -- common/autotest_common.sh@10 -- # set +x 00:17:51.515 ************************************ 00:17:51.515 START TEST nvmf_digest_error 00:17:51.515 ************************************ 00:17:51.515 04:21:03 -- common/autotest_common.sh@1114 -- # run_digest_error 00:17:51.515 04:21:03 -- host/digest.sh@101 -- # nvmfappstart --wait-for-rpc 00:17:51.515 04:21:03 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:51.515 04:21:03 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:51.515 04:21:03 -- common/autotest_common.sh@10 -- # set +x 00:17:51.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:51.515 04:21:03 -- nvmf/common.sh@469 -- # nvmfpid=84287 00:17:51.515 04:21:03 -- nvmf/common.sh@470 -- # waitforlisten 84287 00:17:51.515 04:21:03 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:17:51.515 04:21:03 -- common/autotest_common.sh@829 -- # '[' -z 84287 ']' 00:17:51.515 04:21:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:51.515 04:21:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:51.515 04:21:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:51.515 04:21:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:51.515 04:21:03 -- common/autotest_common.sh@10 -- # set +x 00:17:51.515 [2024-12-06 04:21:03.876604] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:51.515 [2024-12-06 04:21:03.876687] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:51.515 [2024-12-06 04:21:04.012185] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:51.774 [2024-12-06 04:21:04.103984] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:51.774 [2024-12-06 04:21:04.104130] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:51.774 [2024-12-06 04:21:04.104145] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:51.774 [2024-12-06 04:21:04.104154] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
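For the error-injection test the target is launched with --wait-for-rpc so the crc32c opcode can be re-routed to the error accel module before the framework initializes; the rpc_cmd call recorded just below does exactly that. A minimal target-side sketch of that step (the null bdev and TCP listener that follow are set up by the test's common_target_config helper, and the framework_start_init call is assumed here rather than shown in this slice of the log):

# Target side: while the app is still paused (--wait-for-rpc), bind crc32c to the
# error-injection module, then let initialization continue.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock accel_assign_opc -o crc32c -m error
# Assumed to follow at this point; not visible in this excerpt of the log.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock framework_start_init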
00:17:51.774 [2024-12-06 04:21:04.104180] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:52.366 04:21:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:52.366 04:21:04 -- common/autotest_common.sh@862 -- # return 0 00:17:52.366 04:21:04 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:52.366 04:21:04 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:52.366 04:21:04 -- common/autotest_common.sh@10 -- # set +x 00:17:52.366 04:21:04 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:52.366 04:21:04 -- host/digest.sh@103 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:17:52.366 04:21:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.366 04:21:04 -- common/autotest_common.sh@10 -- # set +x 00:17:52.366 [2024-12-06 04:21:04.900750] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:17:52.366 04:21:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.366 04:21:04 -- host/digest.sh@104 -- # common_target_config 00:17:52.366 04:21:04 -- host/digest.sh@43 -- # rpc_cmd 00:17:52.366 04:21:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.366 04:21:04 -- common/autotest_common.sh@10 -- # set +x 00:17:52.625 null0 00:17:52.625 [2024-12-06 04:21:05.010438] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:52.625 [2024-12-06 04:21:05.034633] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:52.625 04:21:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.625 04:21:05 -- host/digest.sh@107 -- # run_bperf_err randread 4096 128 00:17:52.625 04:21:05 -- host/digest.sh@54 -- # local rw bs qd 00:17:52.625 04:21:05 -- host/digest.sh@56 -- # rw=randread 00:17:52.625 04:21:05 -- host/digest.sh@56 -- # bs=4096 00:17:52.625 04:21:05 -- host/digest.sh@56 -- # qd=128 00:17:52.625 04:21:05 -- host/digest.sh@58 -- # bperfpid=84319 00:17:52.625 04:21:05 -- host/digest.sh@60 -- # waitforlisten 84319 /var/tmp/bperf.sock 00:17:52.625 04:21:05 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:17:52.625 04:21:05 -- common/autotest_common.sh@829 -- # '[' -z 84319 ']' 00:17:52.625 04:21:05 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:52.625 04:21:05 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:52.625 04:21:05 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:52.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:52.625 04:21:05 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:52.625 04:21:05 -- common/autotest_common.sh@10 -- # set +x 00:17:52.625 [2024-12-06 04:21:05.092947] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:17:52.625 [2024-12-06 04:21:05.093334] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84319 ] 00:17:52.883 [2024-12-06 04:21:05.226924] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:52.883 [2024-12-06 04:21:05.318451] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:53.817 04:21:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:53.817 04:21:06 -- common/autotest_common.sh@862 -- # return 0 00:17:53.817 04:21:06 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:53.817 04:21:06 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:53.817 04:21:06 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:17:53.817 04:21:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.817 04:21:06 -- common/autotest_common.sh@10 -- # set +x 00:17:53.817 04:21:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.817 04:21:06 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:53.817 04:21:06 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:54.385 nvme0n1 00:17:54.385 04:21:06 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:17:54.385 04:21:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.385 04:21:06 -- common/autotest_common.sh@10 -- # set +x 00:17:54.385 04:21:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.385 04:21:06 -- host/digest.sh@69 -- # bperf_py perform_tests 00:17:54.385 04:21:06 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:54.385 Running I/O for 2 seconds... 
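With the target's crc32c opcode bound to the error module, the test first disables injection, attaches the controller from bdevperf with retries turned off, then switches injection to corrupt so the computed data digests come back wrong; the long stream of "data digest error" completions below is the expected result. A sketch assembled from the rpc_cmd / bperf_rpc calls above (rpc.py without -s talks to the target's default /var/tmp/spdk.sock):

# Target side: make sure nothing is injected while the controller attaches
/home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t disable
# bperf side: count NVMe errors and never retry, so every failed I/O is surfaced
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# Target side: enable crc32c corruption (flags copied verbatim from the log), then run I/O
/home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests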
00:17:54.385 [2024-12-06 04:21:06.834316] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:54.385 [2024-12-06 04:21:06.834626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10607 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.385 [2024-12-06 04:21:06.834776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.385 [2024-12-06 04:21:06.850183] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:54.385 [2024-12-06 04:21:06.850413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:472 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.385 [2024-12-06 04:21:06.850583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.385 [2024-12-06 04:21:06.866333] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:54.386 [2024-12-06 04:21:06.866580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.386 [2024-12-06 04:21:06.866602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.386 [2024-12-06 04:21:06.881978] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:54.386 [2024-12-06 04:21:06.882017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.386 [2024-12-06 04:21:06.882030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.386 [2024-12-06 04:21:06.896991] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:54.386 [2024-12-06 04:21:06.897189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3438 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.386 [2024-12-06 04:21:06.897206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.386 [2024-12-06 04:21:06.912151] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:54.386 [2024-12-06 04:21:06.912188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15698 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.386 [2024-12-06 04:21:06.912218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.386 [2024-12-06 04:21:06.927465] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:54.386 [2024-12-06 04:21:06.927500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16760 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.386 [2024-12-06 04:21:06.927529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.386 [2024-12-06 04:21:06.943241] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:54.386 [2024-12-06 04:21:06.943457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18798 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.386 [2024-12-06 04:21:06.943475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.645 [2024-12-06 04:21:06.959485] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:54.645 [2024-12-06 04:21:06.959520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10546 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.645 [2024-12-06 04:21:06.959548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.645 [2024-12-06 04:21:06.974517] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:54.645 [2024-12-06 04:21:06.974732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10417 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.645 [2024-12-06 04:21:06.974751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.645 [2024-12-06 04:21:06.990037] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:54.645 [2024-12-06 04:21:06.990074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:14008 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.645 [2024-12-06 04:21:06.990103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.645 [2024-12-06 04:21:07.006525] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:54.645 [2024-12-06 04:21:07.006591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:15785 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.645 [2024-12-06 04:21:07.006605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.645 [2024-12-06 04:21:07.023738] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:54.645 [2024-12-06 04:21:07.023950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:5987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.645 [2024-12-06 04:21:07.023968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.645 [2024-12-06 04:21:07.040625] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:54.645 [2024-12-06 04:21:07.040667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:22710 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.645 [2024-12-06 04:21:07.040681] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.645 [2024-12-06 04:21:07.057422] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:54.645 [2024-12-06 04:21:07.057647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:18362 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.645 [2024-12-06 04:21:07.057667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.645 [2024-12-06 04:21:07.073337] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:54.645 [2024-12-06 04:21:07.073562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:2942 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.645 [2024-12-06 04:21:07.073580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.645 [2024-12-06 04:21:07.090381] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:54.645 [2024-12-06 04:21:07.090432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:7771 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.645 [2024-12-06 04:21:07.090465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.645 [2024-12-06 04:21:07.106928] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:54.645 [2024-12-06 04:21:07.106967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:1878 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.645 [2024-12-06 04:21:07.106980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.645 [2024-12-06 04:21:07.123855] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:54.645 [2024-12-06 04:21:07.124013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:18928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.645 [2024-12-06 04:21:07.124030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.645 [2024-12-06 04:21:07.141286] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:54.645 [2024-12-06 04:21:07.141328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:1722 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.645 [2024-12-06 04:21:07.141343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.645 [2024-12-06 04:21:07.158170] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:54.645 [2024-12-06 04:21:07.158376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:15903 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.645 [2024-12-06 04:21:07.158425] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.645 [2024-12-06 04:21:07.174778] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:54.645 [2024-12-06 04:21:07.174818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:23989 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.645 [2024-12-06 04:21:07.174833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.645 [2024-12-06 04:21:07.190462] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:54.645 [2024-12-06 04:21:07.190497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:20344 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.645 [2024-12-06 04:21:07.190510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.645 [2024-12-06 04:21:07.206438] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:54.646 [2024-12-06 04:21:07.206504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:16728 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.646 [2024-12-06 04:21:07.206535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.906 [2024-12-06 04:21:07.222907] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:54.906 [2024-12-06 04:21:07.222946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:25191 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.906 [2024-12-06 04:21:07.222977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.906 [2024-12-06 04:21:07.239706] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:54.906 [2024-12-06 04:21:07.239893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:11813 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.906 [2024-12-06 04:21:07.239912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.906 [2024-12-06 04:21:07.256470] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:54.906 [2024-12-06 04:21:07.256507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:8483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.906 [2024-12-06 04:21:07.256538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.906 [2024-12-06 04:21:07.272452] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:54.906 [2024-12-06 04:21:07.272489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:10626 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:17:54.906 [2024-12-06 04:21:07.272519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.906 [2024-12-06 04:21:07.287564] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:54.906 [2024-12-06 04:21:07.287600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:7323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.906 [2024-12-06 04:21:07.287630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.906 [2024-12-06 04:21:07.302684] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:54.906 [2024-12-06 04:21:07.302904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:8746 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.906 [2024-12-06 04:21:07.302921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.906 [2024-12-06 04:21:07.318285] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:54.906 [2024-12-06 04:21:07.318494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:11374 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.906 [2024-12-06 04:21:07.318511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.906 [2024-12-06 04:21:07.333497] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:54.906 [2024-12-06 04:21:07.333535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:13206 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.906 [2024-12-06 04:21:07.333565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.906 [2024-12-06 04:21:07.349204] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:54.906 [2024-12-06 04:21:07.349241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:14803 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.906 [2024-12-06 04:21:07.349272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.906 [2024-12-06 04:21:07.364285] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:54.906 [2024-12-06 04:21:07.364506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:14456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.906 [2024-12-06 04:21:07.364523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.906 [2024-12-06 04:21:07.379624] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:54.906 [2024-12-06 04:21:07.379826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 
nsid:1 lba:14731 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.906 [2024-12-06 04:21:07.379843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.906 [2024-12-06 04:21:07.394897] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:54.906 [2024-12-06 04:21:07.395051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:13343 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.906 [2024-12-06 04:21:07.395068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.906 [2024-12-06 04:21:07.410277] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:54.906 [2024-12-06 04:21:07.410496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:6357 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.906 [2024-12-06 04:21:07.410513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.906 [2024-12-06 04:21:07.426330] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:54.906 [2024-12-06 04:21:07.426366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:6897 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.906 [2024-12-06 04:21:07.426396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.906 [2024-12-06 04:21:07.441429] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:54.906 [2024-12-06 04:21:07.441464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:12633 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.906 [2024-12-06 04:21:07.441495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.906 [2024-12-06 04:21:07.456553] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:54.906 [2024-12-06 04:21:07.456588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:15831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.906 [2024-12-06 04:21:07.456618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.166 [2024-12-06 04:21:07.472378] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:55.166 [2024-12-06 04:21:07.472441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:19481 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.166 [2024-12-06 04:21:07.472471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.166 [2024-12-06 04:21:07.487987] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:55.166 [2024-12-06 04:21:07.488025] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:21675 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.166 [2024-12-06 04:21:07.488038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.166 [2024-12-06 04:21:07.504962] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:55.166 [2024-12-06 04:21:07.505000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:21350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.166 [2024-12-06 04:21:07.505030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.166 [2024-12-06 04:21:07.520851] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:55.166 [2024-12-06 04:21:07.521042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:22443 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.166 [2024-12-06 04:21:07.521059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.166 [2024-12-06 04:21:07.536992] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:55.166 [2024-12-06 04:21:07.537028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:8135 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.166 [2024-12-06 04:21:07.537057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.166 [2024-12-06 04:21:07.552141] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:55.166 [2024-12-06 04:21:07.552178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:5096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.166 [2024-12-06 04:21:07.552222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.166 [2024-12-06 04:21:07.567119] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:55.166 [2024-12-06 04:21:07.567306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:14323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.166 [2024-12-06 04:21:07.567326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.166 [2024-12-06 04:21:07.582617] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:55.166 [2024-12-06 04:21:07.582656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:14006 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.166 [2024-12-06 04:21:07.582670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.166 [2024-12-06 04:21:07.597561] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 
00:17:55.166 [2024-12-06 04:21:07.597752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:8119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.166 [2024-12-06 04:21:07.597769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.166 [2024-12-06 04:21:07.613022] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:55.166 [2024-12-06 04:21:07.613206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:1052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.166 [2024-12-06 04:21:07.613224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.166 [2024-12-06 04:21:07.628270] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:55.166 [2024-12-06 04:21:07.628467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:1519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.166 [2024-12-06 04:21:07.628484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.166 [2024-12-06 04:21:07.643476] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:55.166 [2024-12-06 04:21:07.643659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:1091 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.166 [2024-12-06 04:21:07.643676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.166 [2024-12-06 04:21:07.659507] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:55.166 [2024-12-06 04:21:07.659544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:2416 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.166 [2024-12-06 04:21:07.659574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.166 [2024-12-06 04:21:07.676310] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:55.166 [2024-12-06 04:21:07.676345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:4073 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.166 [2024-12-06 04:21:07.676374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.166 [2024-12-06 04:21:07.693640] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:55.166 [2024-12-06 04:21:07.693677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:21754 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.167 [2024-12-06 04:21:07.693706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.167 [2024-12-06 04:21:07.708754] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:55.167 [2024-12-06 04:21:07.708806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:23502 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.167 [2024-12-06 04:21:07.708835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.167 [2024-12-06 04:21:07.723736] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:55.167 [2024-12-06 04:21:07.723770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:553 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.167 [2024-12-06 04:21:07.723799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.426 [2024-12-06 04:21:07.741240] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:55.426 [2024-12-06 04:21:07.741276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:805 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.426 [2024-12-06 04:21:07.741290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.426 [2024-12-06 04:21:07.759543] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:55.426 [2024-12-06 04:21:07.759580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:24419 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.426 [2024-12-06 04:21:07.759594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.426 [2024-12-06 04:21:07.777087] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:55.426 [2024-12-06 04:21:07.777124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:16643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.426 [2024-12-06 04:21:07.777138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.426 [2024-12-06 04:21:07.794614] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:55.426 [2024-12-06 04:21:07.794653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:13513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.426 [2024-12-06 04:21:07.794667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.426 [2024-12-06 04:21:07.811840] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:55.426 [2024-12-06 04:21:07.811998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:9425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.426 [2024-12-06 04:21:07.812016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:17:55.426 [2024-12-06 04:21:07.829033] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:55.426 [2024-12-06 04:21:07.829072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:19497 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.426 [2024-12-06 04:21:07.829101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.426 [2024-12-06 04:21:07.851901] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:55.426 [2024-12-06 04:21:07.851937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:3516 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.426 [2024-12-06 04:21:07.851967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.426 [2024-12-06 04:21:07.867874] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:55.426 [2024-12-06 04:21:07.867910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:5097 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.426 [2024-12-06 04:21:07.867939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.426 [2024-12-06 04:21:07.882965] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:55.426 [2024-12-06 04:21:07.883169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:19746 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.426 [2024-12-06 04:21:07.883188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.426 [2024-12-06 04:21:07.898295] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:55.426 [2024-12-06 04:21:07.898331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:23804 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.426 [2024-12-06 04:21:07.898360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.426 [2024-12-06 04:21:07.913521] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:55.426 [2024-12-06 04:21:07.913558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:7173 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.426 [2024-12-06 04:21:07.913587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.426 [2024-12-06 04:21:07.928587] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:55.426 [2024-12-06 04:21:07.928621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:10191 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.426 [2024-12-06 04:21:07.928650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.426 [2024-12-06 04:21:07.944485] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:55.426 [2024-12-06 04:21:07.944520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:18299 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.426 [2024-12-06 04:21:07.944548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.426 [2024-12-06 04:21:07.959590] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:55.426 [2024-12-06 04:21:07.959624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:20540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.426 [2024-12-06 04:21:07.959653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.426 [2024-12-06 04:21:07.974573] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:55.426 [2024-12-06 04:21:07.974768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:17435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.426 [2024-12-06 04:21:07.974786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.686 [2024-12-06 04:21:07.990445] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:55.686 [2024-12-06 04:21:07.990488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:5236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.686 [2024-12-06 04:21:07.990518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.686 [2024-12-06 04:21:08.006078] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:55.686 [2024-12-06 04:21:08.006113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:14448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.686 [2024-12-06 04:21:08.006141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.686 [2024-12-06 04:21:08.021200] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:55.686 [2024-12-06 04:21:08.021238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:760 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.686 [2024-12-06 04:21:08.021267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.686 [2024-12-06 04:21:08.036276] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:55.686 [2024-12-06 04:21:08.036311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:4713 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.686 [2024-12-06 
04:21:08.036340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.686 [2024-12-06 04:21:08.052321] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:55.686 [2024-12-06 04:21:08.052355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:16255 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.686 [2024-12-06 04:21:08.052384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.686 [2024-12-06 04:21:08.067006] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:55.686 [2024-12-06 04:21:08.067200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:21444 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.686 [2024-12-06 04:21:08.067218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.686 [2024-12-06 04:21:08.082311] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:55.686 [2024-12-06 04:21:08.082515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:14063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.686 [2024-12-06 04:21:08.082532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.686 [2024-12-06 04:21:08.098494] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:55.686 [2024-12-06 04:21:08.098530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:12681 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.686 [2024-12-06 04:21:08.098585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.686 [2024-12-06 04:21:08.114514] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:55.686 [2024-12-06 04:21:08.114589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:10823 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.686 [2024-12-06 04:21:08.114620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.686 [2024-12-06 04:21:08.129593] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:55.686 [2024-12-06 04:21:08.129629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:18027 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.686 [2024-12-06 04:21:08.129658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.686 [2024-12-06 04:21:08.144852] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:55.686 [2024-12-06 04:21:08.144888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:11410 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.686 [2024-12-06 04:21:08.144918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.686 [2024-12-06 04:21:08.161386] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:55.686 [2024-12-06 04:21:08.161640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:12359 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.686 [2024-12-06 04:21:08.161657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.686 [2024-12-06 04:21:08.178098] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:55.686 [2024-12-06 04:21:08.178152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:10195 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.686 [2024-12-06 04:21:08.178180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.686 [2024-12-06 04:21:08.194227] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:55.686 [2024-12-06 04:21:08.194262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:8328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.686 [2024-12-06 04:21:08.194290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.686 [2024-12-06 04:21:08.210183] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:55.686 [2024-12-06 04:21:08.210217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:21759 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.686 [2024-12-06 04:21:08.210246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.686 [2024-12-06 04:21:08.226894] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:55.686 [2024-12-06 04:21:08.226935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:24480 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.686 [2024-12-06 04:21:08.226950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.686 [2024-12-06 04:21:08.242722] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:55.686 [2024-12-06 04:21:08.242761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:15069 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.686 [2024-12-06 04:21:08.242774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.946 [2024-12-06 04:21:08.259165] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:55.946 [2024-12-06 04:21:08.259201] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:74 nsid:1 lba:1904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.946 [2024-12-06 04:21:08.259213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.946 [2024-12-06 04:21:08.274861] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:55.946 [2024-12-06 04:21:08.275069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:1030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.946 [2024-12-06 04:21:08.275086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.946 [2024-12-06 04:21:08.290566] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:55.946 [2024-12-06 04:21:08.290605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:8382 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.946 [2024-12-06 04:21:08.290619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.946 [2024-12-06 04:21:08.307076] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:55.946 [2024-12-06 04:21:08.307117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:14282 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.946 [2024-12-06 04:21:08.307131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.946 [2024-12-06 04:21:08.324295] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:55.946 [2024-12-06 04:21:08.324335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:17316 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.946 [2024-12-06 04:21:08.324348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.946 [2024-12-06 04:21:08.340488] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:55.946 [2024-12-06 04:21:08.340524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:14886 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.946 [2024-12-06 04:21:08.340536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.946 [2024-12-06 04:21:08.356225] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:55.946 [2024-12-06 04:21:08.356263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:20220 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.946 [2024-12-06 04:21:08.356275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.946 [2024-12-06 04:21:08.371881] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:55.946 [2024-12-06 04:21:08.371918] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:5810 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.946 [2024-12-06 04:21:08.371930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.946 [2024-12-06 04:21:08.387634] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:55.946 [2024-12-06 04:21:08.387672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:13016 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.946 [2024-12-06 04:21:08.387701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.946 [2024-12-06 04:21:08.403157] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:55.946 [2024-12-06 04:21:08.403340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:12944 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.946 [2024-12-06 04:21:08.403358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.946 [2024-12-06 04:21:08.419189] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:55.946 [2024-12-06 04:21:08.419229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:3559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.946 [2024-12-06 04:21:08.419242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.946 [2024-12-06 04:21:08.434400] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:55.946 [2024-12-06 04:21:08.434632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:12637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.946 [2024-12-06 04:21:08.434650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.946 [2024-12-06 04:21:08.449950] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:55.946 [2024-12-06 04:21:08.450138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:15856 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.946 [2024-12-06 04:21:08.450155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.946 [2024-12-06 04:21:08.466328] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:55.946 [2024-12-06 04:21:08.466364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:24866 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.946 [2024-12-06 04:21:08.466394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.946 [2024-12-06 04:21:08.481337] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1db20b0) 00:17:55.946 [2024-12-06 04:21:08.481372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:25402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.946 [2024-12-06 04:21:08.481413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.946 [2024-12-06 04:21:08.496504] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:55.946 [2024-12-06 04:21:08.496538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:12610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.946 [2024-12-06 04:21:08.496566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.206 [2024-12-06 04:21:08.512366] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:56.206 [2024-12-06 04:21:08.512431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:23297 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.206 [2024-12-06 04:21:08.512466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.206 [2024-12-06 04:21:08.527664] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:56.206 [2024-12-06 04:21:08.527700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:4204 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.206 [2024-12-06 04:21:08.527729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.206 [2024-12-06 04:21:08.542670] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:56.206 [2024-12-06 04:21:08.542898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:6369 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.206 [2024-12-06 04:21:08.542915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.206 [2024-12-06 04:21:08.558093] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:56.206 [2024-12-06 04:21:08.558331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:8585 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.206 [2024-12-06 04:21:08.558348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.206 [2024-12-06 04:21:08.575668] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:56.206 [2024-12-06 04:21:08.575705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:25420 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.206 [2024-12-06 04:21:08.575734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.206 [2024-12-06 04:21:08.592700] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:56.206 [2024-12-06 04:21:08.592874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:5037 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.206 [2024-12-06 04:21:08.592893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.206 [2024-12-06 04:21:08.609788] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:56.206 [2024-12-06 04:21:08.609843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:8951 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.206 [2024-12-06 04:21:08.609857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.206 [2024-12-06 04:21:08.626616] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:56.206 [2024-12-06 04:21:08.626654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:24841 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.206 [2024-12-06 04:21:08.626668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.206 [2024-12-06 04:21:08.643988] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:56.207 [2024-12-06 04:21:08.644028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:22855 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.207 [2024-12-06 04:21:08.644043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.207 [2024-12-06 04:21:08.660124] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:56.207 [2024-12-06 04:21:08.660315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:17622 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.207 [2024-12-06 04:21:08.660332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.207 [2024-12-06 04:21:08.676155] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:56.207 [2024-12-06 04:21:08.676194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:3527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.207 [2024-12-06 04:21:08.676207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.207 [2024-12-06 04:21:08.693221] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:56.207 [2024-12-06 04:21:08.693256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10315 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.207 [2024-12-06 04:21:08.693285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:17:56.207 [2024-12-06 04:21:08.709736] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:56.207 [2024-12-06 04:21:08.709971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:14995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.207 [2024-12-06 04:21:08.709988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.207 [2024-12-06 04:21:08.726413] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:56.207 [2024-12-06 04:21:08.726449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:23174 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.207 [2024-12-06 04:21:08.726478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.207 [2024-12-06 04:21:08.741246] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:56.207 [2024-12-06 04:21:08.741281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5068 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.207 [2024-12-06 04:21:08.741311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.207 [2024-12-06 04:21:08.756191] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:56.207 [2024-12-06 04:21:08.756226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14212 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.207 [2024-12-06 04:21:08.756254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.466 [2024-12-06 04:21:08.771795] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:56.466 [2024-12-06 04:21:08.771830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17925 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.466 [2024-12-06 04:21:08.771861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.466 [2024-12-06 04:21:08.786803] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:56.466 [2024-12-06 04:21:08.786997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7691 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.466 [2024-12-06 04:21:08.787015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.466 [2024-12-06 04:21:08.801913] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0) 00:17:56.466 [2024-12-06 04:21:08.802099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12212 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.466 [2024-12-06 04:21:08.802116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:17:56.466 [2024-12-06 04:21:08.816613] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1db20b0)
00:17:56.466 [2024-12-06 04:21:08.816798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:158 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:56.466 [2024-12-06 04:21:08.816816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:17:56.466
00:17:56.466 Latency(us)
00:17:56.466 [2024-12-06T04:21:09.031Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:56.466 [2024-12-06T04:21:09.031Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:17:56.467 nvme0n1 : 2.01 15865.82 61.98 0.00 0.00 8061.28 6851.49 30742.34
00:17:56.467 [2024-12-06T04:21:09.032Z] ===================================================================================================================
00:17:56.467 [2024-12-06T04:21:09.032Z] Total : 15865.82 61.98 0.00 0.00 8061.28 6851.49 30742.34
00:17:56.467 0
00:17:56.467 04:21:08 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:17:56.467 04:21:08 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:17:56.467 04:21:08 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:17:56.467 | .driver_specific
00:17:56.467 | .nvme_error
00:17:56.467 | .status_code
00:17:56.467 | .command_transient_transport_error'
00:17:56.467 04:21:08 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:17:56.727 04:21:09 -- host/digest.sh@71 -- # (( 125 > 0 ))
00:17:56.727 04:21:09 -- host/digest.sh@73 -- # killprocess 84319
00:17:56.727 04:21:09 -- common/autotest_common.sh@936 -- # '[' -z 84319 ']'
00:17:56.727 04:21:09 -- common/autotest_common.sh@940 -- # kill -0 84319
00:17:56.727 04:21:09 -- common/autotest_common.sh@941 -- # uname
00:17:56.727 04:21:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:17:56.727 04:21:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84319
00:17:56.727 killing process with pid 84319
Received shutdown signal, test time was about 2.000000 seconds
00:17:56.727
00:17:56.727 Latency(us)
[2024-12-06T04:21:09.292Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
[2024-12-06T04:21:09.292Z] ===================================================================================================================
[2024-12-06T04:21:09.292Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:17:56.727 04:21:09 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:17:56.727 04:21:09 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:17:56.727 04:21:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84319'
00:17:56.727 04:21:09 -- common/autotest_common.sh@955 -- # kill 84319
00:17:56.727 04:21:09 -- common/autotest_common.sh@960 -- # wait 84319
00:17:56.986 04:21:09 -- host/digest.sh@108 -- # run_bperf_err randread 131072 16
00:17:56.986 04:21:09 -- host/digest.sh@54 -- # local rw bs qd
00:17:56.986 04:21:09 -- host/digest.sh@56 -- # rw=randread
00:17:56.986 04:21:09 -- host/digest.sh@56 -- # bs=131072
00:17:56.986 04:21:09 -- host/digest.sh@56 -- # qd=16
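The first pass ends just above: get_transient_errcount reads back a command_transient_transport_error count of 125 from bdev_get_iostat, so the (( 125 > 0 )) check passes, the first bperf instance (pid 84319) is killed, and run_bperf_err starts the second pass with 131072-byte reads. Each entry in the error flood is the same pattern: a data digest mismatch detected in nvme_tcp.c, followed by the READ completing with status (00/22), i.e. generic command status (SCT 00h) with status code 22h, Command Transient Transport Error; with --bdev-retry-count -1 those completions are presumably retried rather than surfaced, which would explain the ~15.9K IOPS and 0.00 Fail/s in the table above. For reference, a standalone sketch of the same counter query (assuming a bdevperf instance is still listening on /var/tmp/bperf.sock and exposing bdev nvme0n1, as in this run) would be:

  # Hypothetical manual rerun of the harness's get_transient_errcount helper;
  # the RPC name and jq path are the ones shown verbatim in the trace above.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'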
00:17:56.986 04:21:09 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:17:56.986 04:21:09 -- host/digest.sh@58 -- # bperfpid=84379
00:17:56.986 04:21:09 -- host/digest.sh@60 -- # waitforlisten 84379 /var/tmp/bperf.sock
00:17:56.986 04:21:09 -- common/autotest_common.sh@829 -- # '[' -z 84379 ']'
00:17:56.986 04:21:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:17:56.986 04:21:09 -- common/autotest_common.sh@834 -- # local max_retries=100
00:17:56.986 04:21:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:17:56.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:17:56.986 04:21:09 -- common/autotest_common.sh@838 -- # xtrace_disable
00:17:56.986 04:21:09 -- common/autotest_common.sh@10 -- # set +x
00:17:56.986 [2024-12-06 04:21:09.458158] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:17:56.986 [2024-12-06 04:21:09.458453] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84379 ]
00:17:56.986 I/O size of 131072 is greater than zero copy threshold (65536).
00:17:56.986 Zero copy mechanism will not be used.
00:17:57.245 [2024-12-06 04:21:09.594762] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:57.245 [2024-12-06 04:21:09.677034] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:17:58.183 04:21:10 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:17:58.183 04:21:10 -- common/autotest_common.sh@862 -- # return 0
00:17:58.183 04:21:10 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:17:58.183 04:21:10 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:17:58.183 04:21:10 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:17:58.183 04:21:10 -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:58.183 04:21:10 -- common/autotest_common.sh@10 -- # set +x
00:17:58.183 04:21:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:58.183 04:21:10 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:17:58.183 04:21:10 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:17:58.442 nvme0n1
00:17:58.702 04:21:11 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:17:58.702 04:21:11 -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:58.702 04:21:11 -- common/autotest_common.sh@10 -- # set +x
00:17:58.702 04:21:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:58.702 04:21:11 -- host/digest.sh@69 -- # bperf_py perform_tests
00:17:58.702 04:21:11 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:17:58.702 I/O size of 131072 is greater than zero copy threshold (65536).
00:17:58.702 Zero copy mechanism will not be used.
00:17:58.702 Running I/O for 2 seconds...
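The trace above is the wiring for the second pass (randread, 131072-byte I/O, queue depth 16). Restated as a condensed sketch, under the assumption that bperf_rpc and rpc_cmd are the harness wrappers seen in the trace (bperf_rpc targets /var/tmp/bperf.sock, while rpc_cmd goes to the harness's default RPC socket, which this excerpt does not show), the sequence is:

  # Sketch only; every RPC name and flag below appears verbatim in the trace above.
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Keep per-status NVMe error counters and retry failed I/O indefinitely (-1).
  $RPC -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # crc32c error injection stays disabled while the controller connects...
  rpc_cmd accel_error_inject_error -o crc32c -t disable        # harness wrapper, default RPC socket
  # ...then the controller is attached over TCP with data digest (--ddgst) enabled, creating nvme0n1.
  $RPC -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # crc32c corruption is injected (-t corrupt -i 32, as traced), so digest verification
  # of received payloads fails and the affected READs complete with (00/22).
  rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32  # harness wrapper, default RPC socket
  # Finally the queued bdevperf job is started, producing the error output that follows.
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests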
00:17:58.702 [2024-12-06 04:21:11.123778] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:58.702 [2024-12-06 04:21:11.123846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.702 [2024-12-06 04:21:11.123877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:58.702 [2024-12-06 04:21:11.127912] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:58.702 [2024-12-06 04:21:11.127951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.702 [2024-12-06 04:21:11.127981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:58.702 [2024-12-06 04:21:11.132091] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:58.702 [2024-12-06 04:21:11.132131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.702 [2024-12-06 04:21:11.132160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:58.702 [2024-12-06 04:21:11.136262] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:58.702 [2024-12-06 04:21:11.136301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.702 [2024-12-06 04:21:11.136332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.702 [2024-12-06 04:21:11.140339] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:58.702 [2024-12-06 04:21:11.140377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.702 [2024-12-06 04:21:11.140438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:58.702 [2024-12-06 04:21:11.144454] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:58.702 [2024-12-06 04:21:11.144491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.702 [2024-12-06 04:21:11.144520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:58.702 [2024-12-06 04:21:11.148447] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:58.702 [2024-12-06 04:21:11.148484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.702 [2024-12-06 04:21:11.148514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:58.702 [2024-12-06 04:21:11.152436] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:58.702 [2024-12-06 04:21:11.152473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.702 [2024-12-06 04:21:11.152502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.702 [2024-12-06 04:21:11.156468] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:58.702 [2024-12-06 04:21:11.156504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.702 [2024-12-06 04:21:11.156535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:58.702 [2024-12-06 04:21:11.160479] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:58.702 [2024-12-06 04:21:11.160516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.702 [2024-12-06 04:21:11.160545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:58.702 [2024-12-06 04:21:11.164546] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:58.702 [2024-12-06 04:21:11.164583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.702 [2024-12-06 04:21:11.164613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:58.702 [2024-12-06 04:21:11.168635] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:58.702 [2024-12-06 04:21:11.168672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.702 [2024-12-06 04:21:11.168701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.702 [2024-12-06 04:21:11.172741] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:58.702 [2024-12-06 04:21:11.172778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.702 [2024-12-06 04:21:11.172807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:58.702 [2024-12-06 04:21:11.176849] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:58.702 [2024-12-06 04:21:11.176887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.703 [2024-12-06 04:21:11.176916] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:58.703 [2024-12-06 04:21:11.180905] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:58.703 [2024-12-06 04:21:11.180942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.703 [2024-12-06 04:21:11.180972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:58.703 [2024-12-06 04:21:11.184986] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:58.703 [2024-12-06 04:21:11.185023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.703 [2024-12-06 04:21:11.185053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.703 [2024-12-06 04:21:11.189110] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:58.703 [2024-12-06 04:21:11.189148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.703 [2024-12-06 04:21:11.189177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:58.703 [2024-12-06 04:21:11.193235] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:58.703 [2024-12-06 04:21:11.193272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.703 [2024-12-06 04:21:11.193301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:58.703 [2024-12-06 04:21:11.197212] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:58.703 [2024-12-06 04:21:11.197251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.703 [2024-12-06 04:21:11.197280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:58.703 [2024-12-06 04:21:11.201641] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:58.703 [2024-12-06 04:21:11.201696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.703 [2024-12-06 04:21:11.201734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.703 [2024-12-06 04:21:11.206387] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:58.703 [2024-12-06 04:21:11.206470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:58.703 [2024-12-06 04:21:11.206503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:58.703 [2024-12-06 04:21:11.210889] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:58.703 [2024-12-06 04:21:11.210931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.703 [2024-12-06 04:21:11.210963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:58.703 [2024-12-06 04:21:11.215288] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:58.703 [2024-12-06 04:21:11.215327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.703 [2024-12-06 04:21:11.215357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:58.703 [2024-12-06 04:21:11.219868] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:58.703 [2024-12-06 04:21:11.219911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.703 [2024-12-06 04:21:11.219926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.703 [2024-12-06 04:21:11.224264] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:58.703 [2024-12-06 04:21:11.224302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.703 [2024-12-06 04:21:11.224331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:58.703 [2024-12-06 04:21:11.228656] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:58.703 [2024-12-06 04:21:11.228732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.703 [2024-12-06 04:21:11.228755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:58.703 [2024-12-06 04:21:11.233083] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:58.703 [2024-12-06 04:21:11.233136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.703 [2024-12-06 04:21:11.233165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:58.703 [2024-12-06 04:21:11.237507] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:58.703 [2024-12-06 04:21:11.237542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.703 [2024-12-06 04:21:11.237572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.703 [2024-12-06 04:21:11.241957] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:58.703 [2024-12-06 04:21:11.241996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.703 [2024-12-06 04:21:11.242026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:58.703 [2024-12-06 04:21:11.246342] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:58.703 [2024-12-06 04:21:11.246381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.703 [2024-12-06 04:21:11.246419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:58.703 [2024-12-06 04:21:11.250792] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:58.703 [2024-12-06 04:21:11.250834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.703 [2024-12-06 04:21:11.250849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:58.703 [2024-12-06 04:21:11.255128] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:58.703 [2024-12-06 04:21:11.255166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.703 [2024-12-06 04:21:11.255194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.703 [2024-12-06 04:21:11.259359] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:58.703 [2024-12-06 04:21:11.259456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.703 [2024-12-06 04:21:11.259488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:58.703 [2024-12-06 04:21:11.263788] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:58.703 [2024-12-06 04:21:11.263825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.703 [2024-12-06 04:21:11.263854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:58.964 [2024-12-06 04:21:11.268301] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:58.964 [2024-12-06 04:21:11.268499] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.964 [2024-12-06 04:21:11.268518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:58.964 [2024-12-06 04:21:11.273002] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:58.964 [2024-12-06 04:21:11.273042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.964 [2024-12-06 04:21:11.273056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.964 [2024-12-06 04:21:11.277573] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:58.964 [2024-12-06 04:21:11.277613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.964 [2024-12-06 04:21:11.277643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:58.964 [2024-12-06 04:21:11.282072] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:58.964 [2024-12-06 04:21:11.282110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.964 [2024-12-06 04:21:11.282140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:58.964 [2024-12-06 04:21:11.286466] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:58.964 [2024-12-06 04:21:11.286518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.964 [2024-12-06 04:21:11.286533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:58.964 [2024-12-06 04:21:11.290913] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:58.964 [2024-12-06 04:21:11.290953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.964 [2024-12-06 04:21:11.290982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.964 [2024-12-06 04:21:11.295279] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:58.964 [2024-12-06 04:21:11.295317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.964 [2024-12-06 04:21:11.295347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:58.964 [2024-12-06 04:21:11.299645] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 
00:17:58.964 [2024-12-06 04:21:11.299683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.964 [2024-12-06 04:21:11.299713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:58.964 [2024-12-06 04:21:11.304012] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:58.964 [2024-12-06 04:21:11.304066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.964 [2024-12-06 04:21:11.304080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:58.964 [2024-12-06 04:21:11.308202] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:58.964 [2024-12-06 04:21:11.308241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.964 [2024-12-06 04:21:11.308272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.964 [2024-12-06 04:21:11.312437] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:58.964 [2024-12-06 04:21:11.312474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.964 [2024-12-06 04:21:11.312503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:58.964 [2024-12-06 04:21:11.316543] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:58.964 [2024-12-06 04:21:11.316581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.964 [2024-12-06 04:21:11.316609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:58.964 [2024-12-06 04:21:11.320832] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:58.964 [2024-12-06 04:21:11.320871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.964 [2024-12-06 04:21:11.320900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:58.964 [2024-12-06 04:21:11.324961] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:58.964 [2024-12-06 04:21:11.325014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.964 [2024-12-06 04:21:11.325043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.964 [2024-12-06 04:21:11.329168] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xe22680) 00:17:58.964 [2024-12-06 04:21:11.329207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.964 [2024-12-06 04:21:11.329237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:58.964 [2024-12-06 04:21:11.333544] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:58.964 [2024-12-06 04:21:11.333582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.964 [2024-12-06 04:21:11.333611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:58.964 [2024-12-06 04:21:11.337704] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:58.964 [2024-12-06 04:21:11.337759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.964 [2024-12-06 04:21:11.337790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:58.964 [2024-12-06 04:21:11.341954] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:58.964 [2024-12-06 04:21:11.341993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.964 [2024-12-06 04:21:11.342023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.964 [2024-12-06 04:21:11.346173] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:58.964 [2024-12-06 04:21:11.346212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.964 [2024-12-06 04:21:11.346242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:58.964 [2024-12-06 04:21:11.350317] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:58.964 [2024-12-06 04:21:11.350357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.964 [2024-12-06 04:21:11.350386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:58.964 [2024-12-06 04:21:11.354374] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:58.964 [2024-12-06 04:21:11.354423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.964 [2024-12-06 04:21:11.354452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:58.965 [2024-12-06 04:21:11.358840] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:58.965 [2024-12-06 04:21:11.358882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.965 [2024-12-06 04:21:11.358897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.965 [2024-12-06 04:21:11.363301] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:58.965 [2024-12-06 04:21:11.363358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.965 [2024-12-06 04:21:11.363388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:58.965 [2024-12-06 04:21:11.367936] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:58.965 [2024-12-06 04:21:11.367976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.965 [2024-12-06 04:21:11.368006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:58.965 [2024-12-06 04:21:11.372254] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:58.965 [2024-12-06 04:21:11.372292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.965 [2024-12-06 04:21:11.372322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:58.965 [2024-12-06 04:21:11.376526] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:58.965 [2024-12-06 04:21:11.376565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.965 [2024-12-06 04:21:11.376579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.965 [2024-12-06 04:21:11.380619] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:58.965 [2024-12-06 04:21:11.380658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.965 [2024-12-06 04:21:11.380673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:58.965 [2024-12-06 04:21:11.384800] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:58.965 [2024-12-06 04:21:11.384838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.965 [2024-12-06 04:21:11.384851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:17:58.965 [2024-12-06 04:21:11.389045] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:58.965 [2024-12-06 04:21:11.389083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.965 [2024-12-06 04:21:11.389112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:58.965 [2024-12-06 04:21:11.393274] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:58.965 [2024-12-06 04:21:11.393313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.965 [2024-12-06 04:21:11.393343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.965 [2024-12-06 04:21:11.397483] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:58.965 [2024-12-06 04:21:11.397536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.965 [2024-12-06 04:21:11.397566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:58.965 [2024-12-06 04:21:11.401885] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:58.965 [2024-12-06 04:21:11.401924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.965 [2024-12-06 04:21:11.401955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:58.965 [2024-12-06 04:21:11.405997] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:58.965 [2024-12-06 04:21:11.406035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.965 [2024-12-06 04:21:11.406064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:58.965 [2024-12-06 04:21:11.410162] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:58.965 [2024-12-06 04:21:11.410200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.965 [2024-12-06 04:21:11.410231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.965 [2024-12-06 04:21:11.414274] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:58.965 [2024-12-06 04:21:11.414314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.965 [2024-12-06 04:21:11.414344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:58.965 [2024-12-06 04:21:11.418496] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:58.965 [2024-12-06 04:21:11.418534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.965 [2024-12-06 04:21:11.418590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:58.965 [2024-12-06 04:21:11.422679] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:58.965 [2024-12-06 04:21:11.422721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.965 [2024-12-06 04:21:11.422735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:58.965 [2024-12-06 04:21:11.427087] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:58.965 [2024-12-06 04:21:11.427129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.965 [2024-12-06 04:21:11.427163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.965 [2024-12-06 04:21:11.431282] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:58.965 [2024-12-06 04:21:11.431320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.965 [2024-12-06 04:21:11.431349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:58.965 [2024-12-06 04:21:11.435574] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:58.965 [2024-12-06 04:21:11.435611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.965 [2024-12-06 04:21:11.435641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:58.965 [2024-12-06 04:21:11.439682] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:58.965 [2024-12-06 04:21:11.439718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.965 [2024-12-06 04:21:11.439748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:58.965 [2024-12-06 04:21:11.443770] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:58.965 [2024-12-06 04:21:11.443808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.965 [2024-12-06 04:21:11.443821] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.965 [2024-12-06 04:21:11.447964] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:58.965 [2024-12-06 04:21:11.448003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.965 [2024-12-06 04:21:11.448033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:58.965 [2024-12-06 04:21:11.452231] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:58.965 [2024-12-06 04:21:11.452269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.965 [2024-12-06 04:21:11.452298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:58.965 [2024-12-06 04:21:11.456504] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:58.965 [2024-12-06 04:21:11.456542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.965 [2024-12-06 04:21:11.456571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:58.965 [2024-12-06 04:21:11.461062] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:58.965 [2024-12-06 04:21:11.461102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.965 [2024-12-06 04:21:11.461132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.965 [2024-12-06 04:21:11.465718] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:58.965 [2024-12-06 04:21:11.465783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.965 [2024-12-06 04:21:11.465810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:58.965 [2024-12-06 04:21:11.469957] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:58.965 [2024-12-06 04:21:11.469996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.965 [2024-12-06 04:21:11.470026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:58.966 [2024-12-06 04:21:11.474013] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:58.966 [2024-12-06 04:21:11.474051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:58.966 [2024-12-06 04:21:11.474080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:58.966 [2024-12-06 04:21:11.478138] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:58.966 [2024-12-06 04:21:11.478176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.966 [2024-12-06 04:21:11.478205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.966 [2024-12-06 04:21:11.482132] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:58.966 [2024-12-06 04:21:11.482170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.966 [2024-12-06 04:21:11.482200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:58.966 [2024-12-06 04:21:11.486159] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:58.966 [2024-12-06 04:21:11.486199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.966 [2024-12-06 04:21:11.486229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:58.966 [2024-12-06 04:21:11.490218] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:58.966 [2024-12-06 04:21:11.490257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.966 [2024-12-06 04:21:11.490287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:58.966 [2024-12-06 04:21:11.494402] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:58.966 [2024-12-06 04:21:11.494438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.966 [2024-12-06 04:21:11.494468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.966 [2024-12-06 04:21:11.498604] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:58.966 [2024-12-06 04:21:11.498645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.966 [2024-12-06 04:21:11.498659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:58.966 [2024-12-06 04:21:11.502712] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:58.966 [2024-12-06 04:21:11.502753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.966 [2024-12-06 04:21:11.502767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:58.966 [2024-12-06 04:21:11.506980] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:58.966 [2024-12-06 04:21:11.507034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.966 [2024-12-06 04:21:11.507063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:58.966 [2024-12-06 04:21:11.511077] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:58.966 [2024-12-06 04:21:11.511115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.966 [2024-12-06 04:21:11.511143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.966 [2024-12-06 04:21:11.515162] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:58.966 [2024-12-06 04:21:11.515200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.966 [2024-12-06 04:21:11.515229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:58.966 [2024-12-06 04:21:11.519254] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:58.966 [2024-12-06 04:21:11.519292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.966 [2024-12-06 04:21:11.519322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:58.966 [2024-12-06 04:21:11.523603] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:58.966 [2024-12-06 04:21:11.523641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.966 [2024-12-06 04:21:11.523671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:59.226 [2024-12-06 04:21:11.528024] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.227 [2024-12-06 04:21:11.528062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.227 [2024-12-06 04:21:11.528092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.227 [2024-12-06 04:21:11.532480] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.227 [2024-12-06 04:21:11.532518] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.227 [2024-12-06 04:21:11.532547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:59.227 [2024-12-06 04:21:11.536683] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.227 [2024-12-06 04:21:11.536720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.227 [2024-12-06 04:21:11.536750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:59.227 [2024-12-06 04:21:11.540800] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.227 [2024-12-06 04:21:11.540838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.227 [2024-12-06 04:21:11.540868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:59.227 [2024-12-06 04:21:11.545033] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.227 [2024-12-06 04:21:11.545070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.227 [2024-12-06 04:21:11.545100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.227 [2024-12-06 04:21:11.549119] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.227 [2024-12-06 04:21:11.549157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.227 [2024-12-06 04:21:11.549187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:59.227 [2024-12-06 04:21:11.553248] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.227 [2024-12-06 04:21:11.553286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.227 [2024-12-06 04:21:11.553315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:59.227 [2024-12-06 04:21:11.557377] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.227 [2024-12-06 04:21:11.557424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.227 [2024-12-06 04:21:11.557455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:59.227 [2024-12-06 04:21:11.561521] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 
00:17:59.227 [2024-12-06 04:21:11.561558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.227 [2024-12-06 04:21:11.561588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.227 [2024-12-06 04:21:11.565564] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.227 [2024-12-06 04:21:11.565601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.227 [2024-12-06 04:21:11.565630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:59.227 [2024-12-06 04:21:11.569590] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.227 [2024-12-06 04:21:11.569627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.227 [2024-12-06 04:21:11.569656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:59.227 [2024-12-06 04:21:11.573615] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.227 [2024-12-06 04:21:11.573652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.227 [2024-12-06 04:21:11.573685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:59.227 [2024-12-06 04:21:11.577607] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.227 [2024-12-06 04:21:11.577644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.227 [2024-12-06 04:21:11.577677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.227 [2024-12-06 04:21:11.581684] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.227 [2024-12-06 04:21:11.581728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.227 [2024-12-06 04:21:11.581759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:59.227 [2024-12-06 04:21:11.585724] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.227 [2024-12-06 04:21:11.585765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.227 [2024-12-06 04:21:11.585795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:59.227 [2024-12-06 04:21:11.589772] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.227 [2024-12-06 04:21:11.589810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.227 [2024-12-06 04:21:11.589840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:59.227 [2024-12-06 04:21:11.593914] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.227 [2024-12-06 04:21:11.593954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.227 [2024-12-06 04:21:11.593984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.227 [2024-12-06 04:21:11.597963] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.227 [2024-12-06 04:21:11.598001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.227 [2024-12-06 04:21:11.598030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:59.227 [2024-12-06 04:21:11.602028] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.227 [2024-12-06 04:21:11.602066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.227 [2024-12-06 04:21:11.602096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:59.227 [2024-12-06 04:21:11.606120] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.227 [2024-12-06 04:21:11.606158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.227 [2024-12-06 04:21:11.606188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:59.227 [2024-12-06 04:21:11.610187] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.227 [2024-12-06 04:21:11.610225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.227 [2024-12-06 04:21:11.610254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.227 [2024-12-06 04:21:11.614234] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.227 [2024-12-06 04:21:11.614273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.227 [2024-12-06 04:21:11.614302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:59.227 [2024-12-06 04:21:11.618339] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.227 [2024-12-06 04:21:11.618377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.227 [2024-12-06 04:21:11.618421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:59.227 [2024-12-06 04:21:11.622477] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.227 [2024-12-06 04:21:11.622514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.227 [2024-12-06 04:21:11.622572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:59.227 [2024-12-06 04:21:11.626727] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.227 [2024-12-06 04:21:11.626767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.227 [2024-12-06 04:21:11.626780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.227 [2024-12-06 04:21:11.630978] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.227 [2024-12-06 04:21:11.631031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.227 [2024-12-06 04:21:11.631060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:59.227 [2024-12-06 04:21:11.635283] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.227 [2024-12-06 04:21:11.635321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.228 [2024-12-06 04:21:11.635350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:59.228 [2024-12-06 04:21:11.639398] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.228 [2024-12-06 04:21:11.639464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.228 [2024-12-06 04:21:11.639479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:59.228 [2024-12-06 04:21:11.643555] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.228 [2024-12-06 04:21:11.643593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.228 [2024-12-06 04:21:11.643622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:17:59.228 [2024-12-06 04:21:11.647654] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.228 [2024-12-06 04:21:11.647691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.228 [2024-12-06 04:21:11.647720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:59.228 [2024-12-06 04:21:11.652147] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.228 [2024-12-06 04:21:11.652186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.228 [2024-12-06 04:21:11.652215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:59.228 [2024-12-06 04:21:11.656449] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.228 [2024-12-06 04:21:11.656487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.228 [2024-12-06 04:21:11.656516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:59.228 [2024-12-06 04:21:11.660739] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.228 [2024-12-06 04:21:11.660776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.228 [2024-12-06 04:21:11.660805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.228 [2024-12-06 04:21:11.664942] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.228 [2024-12-06 04:21:11.664980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.228 [2024-12-06 04:21:11.665010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:59.228 [2024-12-06 04:21:11.669091] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.228 [2024-12-06 04:21:11.669129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.228 [2024-12-06 04:21:11.669158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:59.228 [2024-12-06 04:21:11.673643] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.228 [2024-12-06 04:21:11.673691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.228 [2024-12-06 04:21:11.673714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:59.228 [2024-12-06 04:21:11.678122] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.228 [2024-12-06 04:21:11.678161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.228 [2024-12-06 04:21:11.678191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.228 [2024-12-06 04:21:11.682207] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.228 [2024-12-06 04:21:11.682245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.228 [2024-12-06 04:21:11.682274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:59.228 [2024-12-06 04:21:11.686313] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.228 [2024-12-06 04:21:11.686353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.228 [2024-12-06 04:21:11.686382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:59.228 [2024-12-06 04:21:11.690515] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.228 [2024-12-06 04:21:11.690593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.228 [2024-12-06 04:21:11.690608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:59.228 [2024-12-06 04:21:11.694724] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.228 [2024-12-06 04:21:11.694762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.228 [2024-12-06 04:21:11.694777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.228 [2024-12-06 04:21:11.698865] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.228 [2024-12-06 04:21:11.698918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.228 [2024-12-06 04:21:11.698947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:59.228 [2024-12-06 04:21:11.703132] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.228 [2024-12-06 04:21:11.703171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.228 [2024-12-06 04:21:11.703200] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:59.228 [2024-12-06 04:21:11.707395] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.228 [2024-12-06 04:21:11.707456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.228 [2024-12-06 04:21:11.707471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:59.228 [2024-12-06 04:21:11.711526] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.228 [2024-12-06 04:21:11.711564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.228 [2024-12-06 04:21:11.711593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.228 [2024-12-06 04:21:11.715736] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.228 [2024-12-06 04:21:11.715773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.228 [2024-12-06 04:21:11.715803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:59.228 [2024-12-06 04:21:11.720266] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.228 [2024-12-06 04:21:11.720307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.228 [2024-12-06 04:21:11.720337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:59.228 [2024-12-06 04:21:11.724904] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.228 [2024-12-06 04:21:11.724947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.228 [2024-12-06 04:21:11.724962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:59.228 [2024-12-06 04:21:11.729263] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.228 [2024-12-06 04:21:11.729303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.228 [2024-12-06 04:21:11.729317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.228 [2024-12-06 04:21:11.733762] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.228 [2024-12-06 04:21:11.733807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:59.228 [2024-12-06 04:21:11.733822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:59.228 [2024-12-06 04:21:11.738215] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.228 [2024-12-06 04:21:11.738257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.228 [2024-12-06 04:21:11.738271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:59.228 [2024-12-06 04:21:11.742739] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.228 [2024-12-06 04:21:11.742782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.228 [2024-12-06 04:21:11.742797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:59.228 [2024-12-06 04:21:11.747293] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.228 [2024-12-06 04:21:11.747336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.228 [2024-12-06 04:21:11.747351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.228 [2024-12-06 04:21:11.751747] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.228 [2024-12-06 04:21:11.751917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.229 [2024-12-06 04:21:11.751935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:59.229 [2024-12-06 04:21:11.756211] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.229 [2024-12-06 04:21:11.756252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.229 [2024-12-06 04:21:11.756266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:59.229 [2024-12-06 04:21:11.760645] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.229 [2024-12-06 04:21:11.760687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.229 [2024-12-06 04:21:11.760702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:59.229 [2024-12-06 04:21:11.765111] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.229 [2024-12-06 04:21:11.765151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.229 [2024-12-06 04:21:11.765165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.229 [2024-12-06 04:21:11.769602] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.229 [2024-12-06 04:21:11.769641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.229 [2024-12-06 04:21:11.769656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:59.229 [2024-12-06 04:21:11.773891] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.229 [2024-12-06 04:21:11.773931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.229 [2024-12-06 04:21:11.773944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:59.229 [2024-12-06 04:21:11.777937] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.229 [2024-12-06 04:21:11.777976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.229 [2024-12-06 04:21:11.777989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:59.229 [2024-12-06 04:21:11.782228] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.229 [2024-12-06 04:21:11.782268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.229 [2024-12-06 04:21:11.782281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.489 [2024-12-06 04:21:11.787329] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.489 [2024-12-06 04:21:11.787559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.489 [2024-12-06 04:21:11.787580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:59.489 [2024-12-06 04:21:11.791986] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.489 [2024-12-06 04:21:11.792029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.489 [2024-12-06 04:21:11.792043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:59.489 [2024-12-06 04:21:11.796509] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.489 [2024-12-06 04:21:11.796549] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.489 [2024-12-06 04:21:11.796562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:59.489 [2024-12-06 04:21:11.800582] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.489 [2024-12-06 04:21:11.800621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.489 [2024-12-06 04:21:11.800634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.489 [2024-12-06 04:21:11.804700] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.489 [2024-12-06 04:21:11.804740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.489 [2024-12-06 04:21:11.804753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:59.489 [2024-12-06 04:21:11.808985] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.489 [2024-12-06 04:21:11.809025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.489 [2024-12-06 04:21:11.809039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:59.489 [2024-12-06 04:21:11.813220] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.489 [2024-12-06 04:21:11.813260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.489 [2024-12-06 04:21:11.813273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:59.489 [2024-12-06 04:21:11.817439] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.490 [2024-12-06 04:21:11.817477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.490 [2024-12-06 04:21:11.817490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.490 [2024-12-06 04:21:11.821561] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.490 [2024-12-06 04:21:11.821598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.490 [2024-12-06 04:21:11.821612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:59.490 [2024-12-06 04:21:11.825608] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 
00:17:59.490 [2024-12-06 04:21:11.825646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.490 [2024-12-06 04:21:11.825662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:59.490 [2024-12-06 04:21:11.829756] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.490 [2024-12-06 04:21:11.829810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.490 [2024-12-06 04:21:11.829825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:59.490 [2024-12-06 04:21:11.833922] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.490 [2024-12-06 04:21:11.833962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.490 [2024-12-06 04:21:11.833976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.490 [2024-12-06 04:21:11.837988] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.490 [2024-12-06 04:21:11.838028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.490 [2024-12-06 04:21:11.838042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:59.490 [2024-12-06 04:21:11.842256] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.490 [2024-12-06 04:21:11.842312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.490 [2024-12-06 04:21:11.842326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:59.490 [2024-12-06 04:21:11.846467] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.490 [2024-12-06 04:21:11.846505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.490 [2024-12-06 04:21:11.846520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:59.490 [2024-12-06 04:21:11.850458] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.490 [2024-12-06 04:21:11.850495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.490 [2024-12-06 04:21:11.850509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.490 [2024-12-06 04:21:11.854809] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.490 [2024-12-06 04:21:11.854851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.490 [2024-12-06 04:21:11.854866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:59.490 [2024-12-06 04:21:11.859068] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.490 [2024-12-06 04:21:11.859107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.490 [2024-12-06 04:21:11.859121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:59.490 [2024-12-06 04:21:11.863153] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.490 [2024-12-06 04:21:11.863193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.490 [2024-12-06 04:21:11.863206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:59.490 [2024-12-06 04:21:11.867422] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.490 [2024-12-06 04:21:11.867472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.490 [2024-12-06 04:21:11.867504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.490 [2024-12-06 04:21:11.871743] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.490 [2024-12-06 04:21:11.871781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.490 [2024-12-06 04:21:11.871811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:59.490 [2024-12-06 04:21:11.876002] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.490 [2024-12-06 04:21:11.876055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.490 [2024-12-06 04:21:11.876086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:59.490 [2024-12-06 04:21:11.880245] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.490 [2024-12-06 04:21:11.880301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.490 [2024-12-06 04:21:11.880332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:59.490 [2024-12-06 04:21:11.884501] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.490 [2024-12-06 04:21:11.884538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.490 [2024-12-06 04:21:11.884570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.490 [2024-12-06 04:21:11.888533] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.490 [2024-12-06 04:21:11.888571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.490 [2024-12-06 04:21:11.888601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:59.490 [2024-12-06 04:21:11.892662] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.490 [2024-12-06 04:21:11.892708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.490 [2024-12-06 04:21:11.892741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:59.490 [2024-12-06 04:21:11.896737] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.490 [2024-12-06 04:21:11.896775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.490 [2024-12-06 04:21:11.896807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:59.490 [2024-12-06 04:21:11.901109] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.490 [2024-12-06 04:21:11.901148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.490 [2024-12-06 04:21:11.901178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.490 [2024-12-06 04:21:11.906967] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.490 [2024-12-06 04:21:11.907043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.490 [2024-12-06 04:21:11.907098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:59.490 [2024-12-06 04:21:11.912831] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.490 [2024-12-06 04:21:11.912888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.490 [2024-12-06 04:21:11.912930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:17:59.490 [2024-12-06 04:21:11.918773] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.490 [2024-12-06 04:21:11.918866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.490 [2024-12-06 04:21:11.918904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:59.490 [2024-12-06 04:21:11.924037] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.490 [2024-12-06 04:21:11.924111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.490 [2024-12-06 04:21:11.924132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.490 [2024-12-06 04:21:11.929662] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.490 [2024-12-06 04:21:11.929732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.490 [2024-12-06 04:21:11.929772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:59.490 [2024-12-06 04:21:11.935475] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.490 [2024-12-06 04:21:11.935613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.490 [2024-12-06 04:21:11.935640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:59.490 [2024-12-06 04:21:11.941077] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.491 [2024-12-06 04:21:11.941323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.491 [2024-12-06 04:21:11.941351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:59.491 [2024-12-06 04:21:11.946889] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.491 [2024-12-06 04:21:11.946945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.491 [2024-12-06 04:21:11.946983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.491 [2024-12-06 04:21:11.952236] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.491 [2024-12-06 04:21:11.952290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.491 [2024-12-06 04:21:11.952328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:59.491 [2024-12-06 04:21:11.957621] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.491 [2024-12-06 04:21:11.957675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.491 [2024-12-06 04:21:11.957714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:59.491 [2024-12-06 04:21:11.963005] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.491 [2024-12-06 04:21:11.963076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.491 [2024-12-06 04:21:11.963108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:59.491 [2024-12-06 04:21:11.968434] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.491 [2024-12-06 04:21:11.968487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.491 [2024-12-06 04:21:11.968528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.491 [2024-12-06 04:21:11.973812] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.491 [2024-12-06 04:21:11.973868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.491 [2024-12-06 04:21:11.973909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:59.491 [2024-12-06 04:21:11.979697] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.491 [2024-12-06 04:21:11.979769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.491 [2024-12-06 04:21:11.979825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:59.491 [2024-12-06 04:21:11.985655] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.491 [2024-12-06 04:21:11.985711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.491 [2024-12-06 04:21:11.985750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:59.491 [2024-12-06 04:21:11.990918] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.491 [2024-12-06 04:21:11.990990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.491 [2024-12-06 04:21:11.991028] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.491 [2024-12-06 04:21:11.996208] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.491 [2024-12-06 04:21:11.996280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.491 [2024-12-06 04:21:11.996303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:59.491 [2024-12-06 04:21:12.001156] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.491 [2024-12-06 04:21:12.001434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.491 [2024-12-06 04:21:12.001462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:59.491 [2024-12-06 04:21:12.006755] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.491 [2024-12-06 04:21:12.006833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.491 [2024-12-06 04:21:12.006873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:59.491 [2024-12-06 04:21:12.012062] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.491 [2024-12-06 04:21:12.012116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.491 [2024-12-06 04:21:12.012154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.491 [2024-12-06 04:21:12.017330] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.491 [2024-12-06 04:21:12.017600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.491 [2024-12-06 04:21:12.017625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:59.491 [2024-12-06 04:21:12.022965] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.491 [2024-12-06 04:21:12.023035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.491 [2024-12-06 04:21:12.023073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:59.491 [2024-12-06 04:21:12.028298] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.491 [2024-12-06 04:21:12.028353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:59.491 [2024-12-06 04:21:12.028393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:59.491 [2024-12-06 04:21:12.033535] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.491 [2024-12-06 04:21:12.033608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.491 [2024-12-06 04:21:12.033631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.491 [2024-12-06 04:21:12.037790] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.491 [2024-12-06 04:21:12.037833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.491 [2024-12-06 04:21:12.037865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:59.491 [2024-12-06 04:21:12.041941] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.491 [2024-12-06 04:21:12.041980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.491 [2024-12-06 04:21:12.042011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:59.491 [2024-12-06 04:21:12.046093] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.491 [2024-12-06 04:21:12.046132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.491 [2024-12-06 04:21:12.046164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:59.491 [2024-12-06 04:21:12.050504] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.491 [2024-12-06 04:21:12.050550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.491 [2024-12-06 04:21:12.050581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.752 [2024-12-06 04:21:12.054727] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.752 [2024-12-06 04:21:12.054768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.752 [2024-12-06 04:21:12.054782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:59.752 [2024-12-06 04:21:12.059223] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.752 [2024-12-06 04:21:12.059261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.753 [2024-12-06 04:21:12.059291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:59.753 [2024-12-06 04:21:12.063452] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.753 [2024-12-06 04:21:12.063501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.753 [2024-12-06 04:21:12.063531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:59.753 [2024-12-06 04:21:12.067608] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.753 [2024-12-06 04:21:12.067645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.753 [2024-12-06 04:21:12.067676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.753 [2024-12-06 04:21:12.071654] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.753 [2024-12-06 04:21:12.071691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.753 [2024-12-06 04:21:12.071720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:59.753 [2024-12-06 04:21:12.075815] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.753 [2024-12-06 04:21:12.075852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.753 [2024-12-06 04:21:12.075883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:59.753 [2024-12-06 04:21:12.080017] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.753 [2024-12-06 04:21:12.080054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.753 [2024-12-06 04:21:12.080084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:59.753 [2024-12-06 04:21:12.084074] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.753 [2024-12-06 04:21:12.084112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.753 [2024-12-06 04:21:12.084143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.753 [2024-12-06 04:21:12.088144] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.753 [2024-12-06 04:21:12.088182] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.753 [2024-12-06 04:21:12.088212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:59.753 [2024-12-06 04:21:12.092306] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.753 [2024-12-06 04:21:12.092344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.753 [2024-12-06 04:21:12.092375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:59.753 [2024-12-06 04:21:12.096464] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.753 [2024-12-06 04:21:12.096500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.753 [2024-12-06 04:21:12.096531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:59.753 [2024-12-06 04:21:12.100551] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.753 [2024-12-06 04:21:12.100586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.753 [2024-12-06 04:21:12.100617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.753 [2024-12-06 04:21:12.104572] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.753 [2024-12-06 04:21:12.104609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.753 [2024-12-06 04:21:12.104640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:59.753 [2024-12-06 04:21:12.108644] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.753 [2024-12-06 04:21:12.108699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.753 [2024-12-06 04:21:12.108729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:59.753 [2024-12-06 04:21:12.112703] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.753 [2024-12-06 04:21:12.112739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.753 [2024-12-06 04:21:12.112769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:59.753 [2024-12-06 04:21:12.116717] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 
00:17:59.753 [2024-12-06 04:21:12.116753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.753 [2024-12-06 04:21:12.116784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.753 [2024-12-06 04:21:12.120797] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.753 [2024-12-06 04:21:12.120834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.753 [2024-12-06 04:21:12.120865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:59.753 [2024-12-06 04:21:12.125042] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.753 [2024-12-06 04:21:12.125080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.753 [2024-12-06 04:21:12.125111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:59.753 [2024-12-06 04:21:12.129280] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.753 [2024-12-06 04:21:12.129316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.753 [2024-12-06 04:21:12.129347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:59.753 [2024-12-06 04:21:12.133449] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.753 [2024-12-06 04:21:12.133486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.753 [2024-12-06 04:21:12.133517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.753 [2024-12-06 04:21:12.137534] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.753 [2024-12-06 04:21:12.137570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.753 [2024-12-06 04:21:12.137601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:59.753 [2024-12-06 04:21:12.141544] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.753 [2024-12-06 04:21:12.141580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.753 [2024-12-06 04:21:12.141611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:59.753 [2024-12-06 04:21:12.145674] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.753 [2024-12-06 04:21:12.145710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.753 [2024-12-06 04:21:12.145740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:59.753 [2024-12-06 04:21:12.149732] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.753 [2024-12-06 04:21:12.149768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.753 [2024-12-06 04:21:12.149798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.753 [2024-12-06 04:21:12.153790] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.753 [2024-12-06 04:21:12.153826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.753 [2024-12-06 04:21:12.153857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:59.753 [2024-12-06 04:21:12.157898] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.753 [2024-12-06 04:21:12.157934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.753 [2024-12-06 04:21:12.157964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:59.753 [2024-12-06 04:21:12.162033] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.753 [2024-12-06 04:21:12.162070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.753 [2024-12-06 04:21:12.162099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:59.753 [2024-12-06 04:21:12.166104] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.753 [2024-12-06 04:21:12.166141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.753 [2024-12-06 04:21:12.166173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.754 [2024-12-06 04:21:12.170189] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.754 [2024-12-06 04:21:12.170227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.754 [2024-12-06 04:21:12.170258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:59.754 [2024-12-06 04:21:12.174299] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.754 [2024-12-06 04:21:12.174335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.754 [2024-12-06 04:21:12.174366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:59.754 [2024-12-06 04:21:12.178362] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.754 [2024-12-06 04:21:12.178424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.754 [2024-12-06 04:21:12.178439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:59.754 [2024-12-06 04:21:12.182370] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.754 [2024-12-06 04:21:12.182452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.754 [2024-12-06 04:21:12.182466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.754 [2024-12-06 04:21:12.186576] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.754 [2024-12-06 04:21:12.186614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.754 [2024-12-06 04:21:12.186628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:59.754 [2024-12-06 04:21:12.191030] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.754 [2024-12-06 04:21:12.191065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.754 [2024-12-06 04:21:12.191094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:59.754 [2024-12-06 04:21:12.195536] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.754 [2024-12-06 04:21:12.195572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.754 [2024-12-06 04:21:12.195603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:59.754 [2024-12-06 04:21:12.199679] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.754 [2024-12-06 04:21:12.199716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.754 [2024-12-06 04:21:12.199746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:17:59.754 [2024-12-06 04:21:12.203903] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.754 [2024-12-06 04:21:12.203940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.754 [2024-12-06 04:21:12.203969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:59.754 [2024-12-06 04:21:12.208069] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.754 [2024-12-06 04:21:12.208106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.754 [2024-12-06 04:21:12.208136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:59.754 [2024-12-06 04:21:12.212444] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.754 [2024-12-06 04:21:12.212492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.754 [2024-12-06 04:21:12.212541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:59.754 [2024-12-06 04:21:12.216820] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.754 [2024-12-06 04:21:12.216860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.754 [2024-12-06 04:21:12.216874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.754 [2024-12-06 04:21:12.221328] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.754 [2024-12-06 04:21:12.221367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.754 [2024-12-06 04:21:12.221381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:59.754 [2024-12-06 04:21:12.225710] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.754 [2024-12-06 04:21:12.225750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.754 [2024-12-06 04:21:12.225764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:59.754 [2024-12-06 04:21:12.230204] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.754 [2024-12-06 04:21:12.230242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.754 [2024-12-06 04:21:12.230255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:59.754 [2024-12-06 04:21:12.234725] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.754 [2024-12-06 04:21:12.234764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.754 [2024-12-06 04:21:12.234779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.754 [2024-12-06 04:21:12.239384] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.754 [2024-12-06 04:21:12.239435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.754 [2024-12-06 04:21:12.239451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:59.754 [2024-12-06 04:21:12.243885] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.754 [2024-12-06 04:21:12.243927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.754 [2024-12-06 04:21:12.243940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:59.754 [2024-12-06 04:21:12.248257] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.754 [2024-12-06 04:21:12.248294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.754 [2024-12-06 04:21:12.248307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:59.754 [2024-12-06 04:21:12.252575] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.754 [2024-12-06 04:21:12.252611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.754 [2024-12-06 04:21:12.252624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.754 [2024-12-06 04:21:12.256810] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.754 [2024-12-06 04:21:12.256849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.754 [2024-12-06 04:21:12.256863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:59.754 [2024-12-06 04:21:12.261088] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.754 [2024-12-06 04:21:12.261125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.754 [2024-12-06 04:21:12.261139] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:59.754 [2024-12-06 04:21:12.265073] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.754 [2024-12-06 04:21:12.265110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.754 [2024-12-06 04:21:12.265123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:59.754 [2024-12-06 04:21:12.269152] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.754 [2024-12-06 04:21:12.269189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.754 [2024-12-06 04:21:12.269203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.754 [2024-12-06 04:21:12.273132] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.754 [2024-12-06 04:21:12.273169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.754 [2024-12-06 04:21:12.273182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:59.754 [2024-12-06 04:21:12.277272] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.754 [2024-12-06 04:21:12.277310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.754 [2024-12-06 04:21:12.277323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:59.754 [2024-12-06 04:21:12.281223] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.754 [2024-12-06 04:21:12.281260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.754 [2024-12-06 04:21:12.281273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:59.755 [2024-12-06 04:21:12.285406] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.755 [2024-12-06 04:21:12.285442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.755 [2024-12-06 04:21:12.285454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.755 [2024-12-06 04:21:12.289416] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.755 [2024-12-06 04:21:12.289452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:59.755 [2024-12-06 04:21:12.289464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:59.755 [2024-12-06 04:21:12.293433] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.755 [2024-12-06 04:21:12.293470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.755 [2024-12-06 04:21:12.293483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:59.755 [2024-12-06 04:21:12.297466] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.755 [2024-12-06 04:21:12.297503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.755 [2024-12-06 04:21:12.297516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:59.755 [2024-12-06 04:21:12.301376] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.755 [2024-12-06 04:21:12.301423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.755 [2024-12-06 04:21:12.301436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.755 [2024-12-06 04:21:12.305301] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.755 [2024-12-06 04:21:12.305338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.755 [2024-12-06 04:21:12.305351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:59.755 [2024-12-06 04:21:12.309460] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:17:59.755 [2024-12-06 04:21:12.309498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.755 [2024-12-06 04:21:12.309512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:00.016 [2024-12-06 04:21:12.313902] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:18:00.016 [2024-12-06 04:21:12.314112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.016 [2024-12-06 04:21:12.314130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:00.016 [2024-12-06 04:21:12.318226] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:18:00.016 [2024-12-06 04:21:12.318264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14464 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.016 [2024-12-06 04:21:12.318278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.016 [2024-12-06 04:21:12.322848] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:18:00.016 [2024-12-06 04:21:12.322919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.016 [2024-12-06 04:21:12.322933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:00.016 [2024-12-06 04:21:12.327067] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:18:00.016 [2024-12-06 04:21:12.327105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.016 [2024-12-06 04:21:12.327118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:00.016 [2024-12-06 04:21:12.331114] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:18:00.016 [2024-12-06 04:21:12.331151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.016 [2024-12-06 04:21:12.331164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:00.016 [2024-12-06 04:21:12.335180] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:18:00.016 [2024-12-06 04:21:12.335218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.016 [2024-12-06 04:21:12.335231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.016 [2024-12-06 04:21:12.339290] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:18:00.016 [2024-12-06 04:21:12.339328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.016 [2024-12-06 04:21:12.339342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:00.016 [2024-12-06 04:21:12.343438] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:18:00.016 [2024-12-06 04:21:12.343486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.016 [2024-12-06 04:21:12.343499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:00.016 [2024-12-06 04:21:12.347370] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:18:00.016 [2024-12-06 04:21:12.347418] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.016 [2024-12-06 04:21:12.347449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:00.016 [2024-12-06 04:21:12.351481] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:18:00.016 [2024-12-06 04:21:12.351518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.016 [2024-12-06 04:21:12.351531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.016 [2024-12-06 04:21:12.355566] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:18:00.016 [2024-12-06 04:21:12.355602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.017 [2024-12-06 04:21:12.355615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:00.017 [2024-12-06 04:21:12.359585] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:18:00.017 [2024-12-06 04:21:12.359622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.017 [2024-12-06 04:21:12.359636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:00.017 [2024-12-06 04:21:12.363569] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:18:00.017 [2024-12-06 04:21:12.363604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.017 [2024-12-06 04:21:12.363617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:00.017 [2024-12-06 04:21:12.367444] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:18:00.017 [2024-12-06 04:21:12.367660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.017 [2024-12-06 04:21:12.367677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.017 [2024-12-06 04:21:12.371687] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:18:00.017 [2024-12-06 04:21:12.371725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.017 [2024-12-06 04:21:12.371738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:00.017 [2024-12-06 04:21:12.375774] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 
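The repeated *ERROR* lines above all come from the same check: the NVMe/TCP receive path recomputes the CRC32C data digest (DDGST) of each incoming data PDU and, when it does not match the digest carried in the PDU, fails the request; the paired *NOTICE* lines then print the affected READ command and its completion with the generic status COMMAND TRANSIENT TRANSPORT ERROR (00/22). A minimal sketch of that kind of digest check follows, assuming a plain software CRC32C (Castagnoli polynomial) and a hypothetical data_digest_ok helper; it is illustrative only and is not the SPDK implementation.

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Software CRC32C (Castagnoli, reflected polynomial 0x82F63B78). */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int k = 0; k < 8; k++)
            crc = (crc >> 1) ^ (0x82F63B78u & (uint32_t)-(int32_t)(crc & 1u));
    }
    return crc ^ 0xFFFFFFFFu;
}

/* Hypothetical helper: recompute the data digest of a received payload and
 * compare it with the digest carried in the PDU (host byte order assumed). */
static bool data_digest_ok(const uint8_t *payload, size_t len, uint32_t recv_ddgst)
{
    return crc32c(payload, len) == recv_ddgst;
}

int main(void)
{
    uint8_t payload[32] = { 0 };            /* stand-in for a received data block */
    uint32_t good = crc32c(payload, sizeof(payload));

    printf("match:    %d\n", data_digest_ok(payload, sizeof(payload), good));
    printf("mismatch: %d\n", data_digest_ok(payload, sizeof(payload), good ^ 1u));
    return 0;
}

A mismatch in this check is what the log reports as "data digest error"; the command itself is otherwise well formed, which is why the completion is reported as a transient transport error rather than a media or namespace error.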
00:18:00.017 [2024-12-06 04:21:12.375812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.017 [2024-12-06 04:21:12.375825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:00.017 [2024-12-06 04:21:12.379907] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:18:00.017 [2024-12-06 04:21:12.379946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.017 [2024-12-06 04:21:12.379959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:00.017 [2024-12-06 04:21:12.383924] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:18:00.017 [2024-12-06 04:21:12.383961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.017 [2024-12-06 04:21:12.383974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.017 [2024-12-06 04:21:12.388045] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:18:00.017 [2024-12-06 04:21:12.388083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.017 [2024-12-06 04:21:12.388096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:00.017 [2024-12-06 04:21:12.391974] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:18:00.017 [2024-12-06 04:21:12.392012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.017 [2024-12-06 04:21:12.392025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:00.017 [2024-12-06 04:21:12.395940] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:18:00.017 [2024-12-06 04:21:12.395976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.017 [2024-12-06 04:21:12.395990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:00.017 [2024-12-06 04:21:12.399849] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:18:00.017 [2024-12-06 04:21:12.399886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.017 [2024-12-06 04:21:12.399898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.017 [2024-12-06 04:21:12.403941] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xe22680) 00:18:00.544 [2024-12-06 04:21:12.932918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.544 [2024-12-06 04:21:12.932930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:00.544 [2024-12-06 04:21:12.937020] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:18:00.544 [2024-12-06 04:21:12.937054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.544 [2024-12-06 04:21:12.937066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:00.544 [2024-12-06 04:21:12.941142] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:18:00.544 [2024-12-06 04:21:12.941177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.544 [2024-12-06 04:21:12.941189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:00.544 [2024-12-06 04:21:12.945517] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:18:00.544 [2024-12-06 04:21:12.945549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.544 [2024-12-06 04:21:12.945561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.544 [2024-12-06 04:21:12.949651] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:18:00.544 [2024-12-06 04:21:12.949686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.544 [2024-12-06 04:21:12.949699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:00.544 [2024-12-06 04:21:12.953792] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:18:00.544 [2024-12-06 04:21:12.953826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.544 [2024-12-06 04:21:12.953839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:00.544 [2024-12-06 04:21:12.958238] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:18:00.544 [2024-12-06 04:21:12.958272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.544 [2024-12-06 04:21:12.958285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:00.544 [2024-12-06 04:21:12.962410] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:18:00.544 [2024-12-06 04:21:12.962451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.544 [2024-12-06 04:21:12.962463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.544 [2024-12-06 04:21:12.966453] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:18:00.544 [2024-12-06 04:21:12.966485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.544 [2024-12-06 04:21:12.966497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:00.544 [2024-12-06 04:21:12.970807] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:18:00.544 [2024-12-06 04:21:12.970842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.544 [2024-12-06 04:21:12.970854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:00.544 [2024-12-06 04:21:12.974997] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:18:00.544 [2024-12-06 04:21:12.975032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.544 [2024-12-06 04:21:12.975044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:00.544 [2024-12-06 04:21:12.979189] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:18:00.544 [2024-12-06 04:21:12.979221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.544 [2024-12-06 04:21:12.979234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.544 [2024-12-06 04:21:12.983432] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:18:00.544 [2024-12-06 04:21:12.983474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.544 [2024-12-06 04:21:12.983486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:00.544 [2024-12-06 04:21:12.987506] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:18:00.544 [2024-12-06 04:21:12.987540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.544 [2024-12-06 04:21:12.987552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:18:00.544 [2024-12-06 04:21:12.991746] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:18:00.544 [2024-12-06 04:21:12.991780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.544 [2024-12-06 04:21:12.991793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:00.544 [2024-12-06 04:21:12.995907] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:18:00.544 [2024-12-06 04:21:12.995942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.544 [2024-12-06 04:21:12.995954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.544 [2024-12-06 04:21:13.000027] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:18:00.544 [2024-12-06 04:21:13.000060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.544 [2024-12-06 04:21:13.000072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:00.544 [2024-12-06 04:21:13.004522] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:18:00.544 [2024-12-06 04:21:13.004556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.544 [2024-12-06 04:21:13.004569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:00.544 [2024-12-06 04:21:13.008837] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:18:00.544 [2024-12-06 04:21:13.008872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.544 [2024-12-06 04:21:13.008886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:00.544 [2024-12-06 04:21:13.013275] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:18:00.544 [2024-12-06 04:21:13.013309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.544 [2024-12-06 04:21:13.013322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.544 [2024-12-06 04:21:13.017900] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:18:00.544 [2024-12-06 04:21:13.017934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.544 [2024-12-06 04:21:13.017947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:00.545 [2024-12-06 04:21:13.022167] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:18:00.545 [2024-12-06 04:21:13.022218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.545 [2024-12-06 04:21:13.022229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:00.545 [2024-12-06 04:21:13.026643] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:18:00.545 [2024-12-06 04:21:13.026681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.545 [2024-12-06 04:21:13.026694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:00.545 [2024-12-06 04:21:13.030914] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:18:00.545 [2024-12-06 04:21:13.030963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.545 [2024-12-06 04:21:13.030975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.545 [2024-12-06 04:21:13.035235] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:18:00.545 [2024-12-06 04:21:13.035283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.545 [2024-12-06 04:21:13.035296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:00.545 [2024-12-06 04:21:13.039504] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:18:00.545 [2024-12-06 04:21:13.039551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.545 [2024-12-06 04:21:13.039564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:00.545 [2024-12-06 04:21:13.043808] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:18:00.545 [2024-12-06 04:21:13.043857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.545 [2024-12-06 04:21:13.043869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:00.545 [2024-12-06 04:21:13.048180] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:18:00.545 [2024-12-06 04:21:13.048230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.545 [2024-12-06 04:21:13.048242] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.545 [2024-12-06 04:21:13.052430] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:18:00.545 [2024-12-06 04:21:13.052476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.545 [2024-12-06 04:21:13.052488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:00.545 [2024-12-06 04:21:13.056608] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:18:00.545 [2024-12-06 04:21:13.056656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.545 [2024-12-06 04:21:13.056668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:00.545 [2024-12-06 04:21:13.060799] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:18:00.545 [2024-12-06 04:21:13.060846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.545 [2024-12-06 04:21:13.060859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:00.545 [2024-12-06 04:21:13.064918] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:18:00.545 [2024-12-06 04:21:13.064965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.545 [2024-12-06 04:21:13.064977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.545 [2024-12-06 04:21:13.069199] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:18:00.545 [2024-12-06 04:21:13.069248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.545 [2024-12-06 04:21:13.069261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:00.545 [2024-12-06 04:21:13.073349] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:18:00.545 [2024-12-06 04:21:13.073397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.545 [2024-12-06 04:21:13.073421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:00.545 [2024-12-06 04:21:13.077500] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:18:00.545 [2024-12-06 04:21:13.077548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:00.545 [2024-12-06 04:21:13.077560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:00.545 [2024-12-06 04:21:13.081622] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:18:00.545 [2024-12-06 04:21:13.081670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.545 [2024-12-06 04:21:13.081681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.545 [2024-12-06 04:21:13.085697] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:18:00.545 [2024-12-06 04:21:13.085744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.545 [2024-12-06 04:21:13.085756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:00.545 [2024-12-06 04:21:13.089856] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:18:00.545 [2024-12-06 04:21:13.089905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.545 [2024-12-06 04:21:13.089917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:00.545 [2024-12-06 04:21:13.094013] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:18:00.545 [2024-12-06 04:21:13.094061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.545 [2024-12-06 04:21:13.094073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:00.545 [2024-12-06 04:21:13.098169] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:18:00.545 [2024-12-06 04:21:13.098218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.545 [2024-12-06 04:21:13.098230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.545 [2024-12-06 04:21:13.102455] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:18:00.545 [2024-12-06 04:21:13.102503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.545 [2024-12-06 04:21:13.102516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:00.805 [2024-12-06 04:21:13.106781] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:18:00.805 [2024-12-06 04:21:13.106817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13280 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.805 [2024-12-06 04:21:13.106831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:00.805 [2024-12-06 04:21:13.111039] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:18:00.805 [2024-12-06 04:21:13.111089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.805 [2024-12-06 04:21:13.111101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:00.805 [2024-12-06 04:21:13.115202] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:18:00.805 [2024-12-06 04:21:13.115251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.805 [2024-12-06 04:21:13.115263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.805 [2024-12-06 04:21:13.119319] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe22680) 00:18:00.805 [2024-12-06 04:21:13.119367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.805 [2024-12-06 04:21:13.119379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:00.805 00:18:00.805 Latency(us) 00:18:00.805 [2024-12-06T04:21:13.370Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:00.805 [2024-12-06T04:21:13.370Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:18:00.805 nvme0n1 : 2.00 7203.96 900.49 0.00 0.00 2217.64 1765.00 6225.92 00:18:00.805 [2024-12-06T04:21:13.370Z] =================================================================================================================== 00:18:00.805 [2024-12-06T04:21:13.370Z] Total : 7203.96 900.49 0.00 0.00 2217.64 1765.00 6225.92 00:18:00.805 0 00:18:00.805 04:21:13 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:18:00.805 04:21:13 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:18:00.805 04:21:13 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:18:00.805 | .driver_specific 00:18:00.805 | .nvme_error 00:18:00.805 | .status_code 00:18:00.805 | .command_transient_transport_error' 00:18:00.805 04:21:13 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:18:01.064 04:21:13 -- host/digest.sh@71 -- # (( 465 > 0 )) 00:18:01.064 04:21:13 -- host/digest.sh@73 -- # killprocess 84379 00:18:01.064 04:21:13 -- common/autotest_common.sh@936 -- # '[' -z 84379 ']' 00:18:01.064 04:21:13 -- common/autotest_common.sh@940 -- # kill -0 84379 00:18:01.064 04:21:13 -- common/autotest_common.sh@941 -- # uname 00:18:01.064 04:21:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:01.064 04:21:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84379 00:18:01.064 04:21:13 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:01.064 04:21:13 -- common/autotest_common.sh@946 -- 
# '[' reactor_1 = sudo ']' 00:18:01.064 04:21:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84379' 00:18:01.064 killing process with pid 84379 00:18:01.064 Received shutdown signal, test time was about 2.000000 seconds 00:18:01.064 00:18:01.064 Latency(us) 00:18:01.064 [2024-12-06T04:21:13.629Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:01.064 [2024-12-06T04:21:13.629Z] =================================================================================================================== 00:18:01.064 [2024-12-06T04:21:13.629Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:01.064 04:21:13 -- common/autotest_common.sh@955 -- # kill 84379 00:18:01.064 04:21:13 -- common/autotest_common.sh@960 -- # wait 84379 00:18:01.323 04:21:13 -- host/digest.sh@113 -- # run_bperf_err randwrite 4096 128 00:18:01.323 04:21:13 -- host/digest.sh@54 -- # local rw bs qd 00:18:01.323 04:21:13 -- host/digest.sh@56 -- # rw=randwrite 00:18:01.323 04:21:13 -- host/digest.sh@56 -- # bs=4096 00:18:01.323 04:21:13 -- host/digest.sh@56 -- # qd=128 00:18:01.323 04:21:13 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:18:01.323 04:21:13 -- host/digest.sh@58 -- # bperfpid=84438 00:18:01.323 04:21:13 -- host/digest.sh@60 -- # waitforlisten 84438 /var/tmp/bperf.sock 00:18:01.323 04:21:13 -- common/autotest_common.sh@829 -- # '[' -z 84438 ']' 00:18:01.323 04:21:13 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:01.323 04:21:13 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:01.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:01.323 04:21:13 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:01.323 04:21:13 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:01.323 04:21:13 -- common/autotest_common.sh@10 -- # set +x 00:18:01.324 [2024-12-06 04:21:13.712415] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:18:01.324 [2024-12-06 04:21:13.712510] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84438 ] 00:18:01.324 [2024-12-06 04:21:13.847590] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:01.582 [2024-12-06 04:21:13.929665] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:02.149 04:21:14 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:02.149 04:21:14 -- common/autotest_common.sh@862 -- # return 0 00:18:02.149 04:21:14 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:02.149 04:21:14 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:02.408 04:21:14 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:18:02.408 04:21:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.408 04:21:14 -- common/autotest_common.sh@10 -- # set +x 00:18:02.408 04:21:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.408 04:21:14 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:02.408 04:21:14 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:02.978 nvme0n1 00:18:02.978 04:21:15 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:18:02.978 04:21:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.978 04:21:15 -- common/autotest_common.sh@10 -- # set +x 00:18:02.978 04:21:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.978 04:21:15 -- host/digest.sh@69 -- # bperf_py perform_tests 00:18:02.978 04:21:15 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:02.978 Running I/O for 2 seconds... 
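Editor's note — for orientation before the next wall of injected errors: the randwrite error-injection pass that starts here repeats the same recipe as the randread pass above. The sketch below is condensed solely from the xtrace lines in this log; the paths, /var/tmp/bperf.sock, 10.0.0.2:4420, nqn.2016-06.io.spdk:cnode1 and the nvme0/nvme0n1 names are the values used by this particular run, not defaults, and rpc_cmd is the suite's wrapper around rpc.py for the target application.

  # Start bdevperf on core mask 0x2 with its own RPC socket; -z defers I/O until perform_tests is issued
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
      -w randwrite -o 4096 -t 2 -q 128 -z &

  # Enable per-controller error counters and unlimited bdev retries on the initiator side
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Keep crc32c error injection disabled while the controller connects
  rpc_cmd accel_error_inject_error -o crc32c -t disable

  # Attach the TCP controller with data digest (--ddgst) enabled
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Now corrupt crc32c results so received data digests mismatch (arguments taken verbatim from the trace)
  rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256

  # Run the 2-second workload, then count the resulting transient transport errors,
  # exactly as get_transient_errcount did for the randread pass above (465 > 0)
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

Each "data digest error" / "COMMAND TRANSIENT TRANSPORT ERROR" pair in the output that follows is the expected result of that injected crc32c corruption, which is exactly what this digest test is counting.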
00:18:02.978 [2024-12-06 04:21:15.442457] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190ddc00 00:18:02.979 [2024-12-06 04:21:15.443918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18687 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.979 [2024-12-06 04:21:15.443961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.979 [2024-12-06 04:21:15.457503] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:02.979 [2024-12-06 04:21:15.458951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.979 [2024-12-06 04:21:15.458991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.979 [2024-12-06 04:21:15.472422] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190ff3c8 00:18:02.979 [2024-12-06 04:21:15.473763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:7150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.979 [2024-12-06 04:21:15.473798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:02.979 [2024-12-06 04:21:15.487214] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190feb58 00:18:02.979 [2024-12-06 04:21:15.488627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.979 [2024-12-06 04:21:15.488664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:02.979 [2024-12-06 04:21:15.502394] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fe720 00:18:02.979 [2024-12-06 04:21:15.503808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.979 [2024-12-06 04:21:15.503844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:02.979 [2024-12-06 04:21:15.518734] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fe2e8 00:18:02.979 [2024-12-06 04:21:15.520118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:12761 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.979 [2024-12-06 04:21:15.520154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:02.979 [2024-12-06 04:21:15.533709] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fdeb0 00:18:02.979 [2024-12-06 04:21:15.535122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.979 [2024-12-06 04:21:15.535158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 
dnr:0 00:18:03.241 [2024-12-06 04:21:15.550022] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fda78 00:18:03.241 [2024-12-06 04:21:15.551332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.241 [2024-12-06 04:21:15.551369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:03.241 [2024-12-06 04:21:15.565362] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fd640 00:18:03.241 [2024-12-06 04:21:15.566692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.241 [2024-12-06 04:21:15.566730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:03.241 [2024-12-06 04:21:15.580552] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fd208 00:18:03.241 [2024-12-06 04:21:15.581796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:22078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.241 [2024-12-06 04:21:15.581830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:03.241 [2024-12-06 04:21:15.595362] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fcdd0 00:18:03.241 [2024-12-06 04:21:15.596779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:18846 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.241 [2024-12-06 04:21:15.596809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:03.241 [2024-12-06 04:21:15.611230] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fc998 00:18:03.241 [2024-12-06 04:21:15.612545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:11323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.241 [2024-12-06 04:21:15.612580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:03.241 [2024-12-06 04:21:15.626142] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fc560 00:18:03.241 [2024-12-06 04:21:15.627650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:14604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.241 [2024-12-06 04:21:15.627702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:03.241 [2024-12-06 04:21:15.641191] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fc128 00:18:03.241 [2024-12-06 04:21:15.642639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:15325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.241 [2024-12-06 04:21:15.642677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 
sqhd:0074 p:0 m:0 dnr:0 00:18:03.241 [2024-12-06 04:21:15.656265] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fbcf0 00:18:03.241 [2024-12-06 04:21:15.657553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:24079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.241 [2024-12-06 04:21:15.657587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:03.241 [2024-12-06 04:21:15.671974] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fb8b8 00:18:03.241 [2024-12-06 04:21:15.673133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:5764 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.241 [2024-12-06 04:21:15.673184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:03.241 [2024-12-06 04:21:15.686692] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fb480 00:18:03.241 [2024-12-06 04:21:15.688103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:22646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.241 [2024-12-06 04:21:15.688141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:03.241 [2024-12-06 04:21:15.701836] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fb048 00:18:03.242 [2024-12-06 04:21:15.703290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.242 [2024-12-06 04:21:15.703326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:03.242 [2024-12-06 04:21:15.717185] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fac10 00:18:03.242 [2024-12-06 04:21:15.718608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:17326 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.242 [2024-12-06 04:21:15.718643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:03.242 [2024-12-06 04:21:15.733121] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fa7d8 00:18:03.242 [2024-12-06 04:21:15.734299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:18266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.242 [2024-12-06 04:21:15.734348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:03.242 [2024-12-06 04:21:15.748266] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fa3a0 00:18:03.242 [2024-12-06 04:21:15.749459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:3058 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.242 [2024-12-06 04:21:15.749687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:41 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:03.242 [2024-12-06 04:21:15.763362] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190f9f68 00:18:03.242 [2024-12-06 04:21:15.764710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:22348 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.242 [2024-12-06 04:21:15.764753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:03.242 [2024-12-06 04:21:15.778474] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190f9b30 00:18:03.242 [2024-12-06 04:21:15.779688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.242 [2024-12-06 04:21:15.779737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:03.242 [2024-12-06 04:21:15.793389] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190f96f8 00:18:03.242 [2024-12-06 04:21:15.794705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:5674 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.242 [2024-12-06 04:21:15.794737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:03.500 [2024-12-06 04:21:15.808807] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190f92c0 00:18:03.500 [2024-12-06 04:21:15.810175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:9400 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.500 [2024-12-06 04:21:15.810226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:03.500 [2024-12-06 04:21:15.824564] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190f8e88 00:18:03.500 [2024-12-06 04:21:15.825672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:11639 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.500 [2024-12-06 04:21:15.825713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:03.500 [2024-12-06 04:21:15.840628] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190f8a50 00:18:03.500 [2024-12-06 04:21:15.841800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:14957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.500 [2024-12-06 04:21:15.841850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:03.500 [2024-12-06 04:21:15.856591] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190f8618 00:18:03.500 [2024-12-06 04:21:15.857717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:19918 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.500 [2024-12-06 04:21:15.857783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:55 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:03.500 [2024-12-06 04:21:15.872153] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190f81e0 00:18:03.500 [2024-12-06 04:21:15.873246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:5874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.500 [2024-12-06 04:21:15.873297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:03.500 [2024-12-06 04:21:15.887271] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190f7da8 00:18:03.500 [2024-12-06 04:21:15.888360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:1426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.500 [2024-12-06 04:21:15.888415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:03.500 [2024-12-06 04:21:15.902051] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190f7970 00:18:03.500 [2024-12-06 04:21:15.903122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:10590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.500 [2024-12-06 04:21:15.903347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:03.500 [2024-12-06 04:21:15.918319] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190f7538 00:18:03.500 [2024-12-06 04:21:15.919401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:21145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.500 [2024-12-06 04:21:15.919452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:03.500 [2024-12-06 04:21:15.934213] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190f7100 00:18:03.500 [2024-12-06 04:21:15.935485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:8120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.500 [2024-12-06 04:21:15.935522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:03.500 [2024-12-06 04:21:15.949845] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190f6cc8 00:18:03.500 [2024-12-06 04:21:15.950891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:24290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.500 [2024-12-06 04:21:15.950927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:03.500 [2024-12-06 04:21:15.965265] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190f6890 00:18:03.500 [2024-12-06 04:21:15.966425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:2340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.500 [2024-12-06 04:21:15.966461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:03.500 [2024-12-06 04:21:15.981416] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190f6458 00:18:03.500 [2024-12-06 04:21:15.982640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:3140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.500 [2024-12-06 04:21:15.982677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:03.500 [2024-12-06 04:21:15.997709] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190f6020 00:18:03.500 [2024-12-06 04:21:15.998762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:3861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.500 [2024-12-06 04:21:15.998796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:03.500 [2024-12-06 04:21:16.013548] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190f5be8 00:18:03.500 [2024-12-06 04:21:16.014558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:2845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.500 [2024-12-06 04:21:16.014592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:03.500 [2024-12-06 04:21:16.029744] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190f57b0 00:18:03.500 [2024-12-06 04:21:16.030805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:24959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.500 [2024-12-06 04:21:16.030876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:03.500 [2024-12-06 04:21:16.045119] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190f5378 00:18:03.500 [2024-12-06 04:21:16.046258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:16384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.500 [2024-12-06 04:21:16.046289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:03.500 [2024-12-06 04:21:16.060616] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190f4f40 00:18:03.500 [2024-12-06 04:21:16.061599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:2563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.500 [2024-12-06 04:21:16.061651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:03.758 [2024-12-06 04:21:16.075731] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190f4b08 00:18:03.758 [2024-12-06 04:21:16.076668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:10344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.758 [2024-12-06 04:21:16.076705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:03.758 [2024-12-06 04:21:16.090290] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190f46d0 00:18:03.758 [2024-12-06 04:21:16.091297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:8737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.758 [2024-12-06 04:21:16.091333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:03.758 [2024-12-06 04:21:16.104812] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190f4298 00:18:03.758 [2024-12-06 04:21:16.105720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:8808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.758 [2024-12-06 04:21:16.105757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:03.758 [2024-12-06 04:21:16.119320] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190f3e60 00:18:03.758 [2024-12-06 04:21:16.120400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:10346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.758 [2024-12-06 04:21:16.120451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:03.758 [2024-12-06 04:21:16.134474] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190f3a28 00:18:03.758 [2024-12-06 04:21:16.135412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:12120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.758 [2024-12-06 04:21:16.135619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:03.758 [2024-12-06 04:21:16.149143] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190f35f0 00:18:03.758 [2024-12-06 04:21:16.150021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:13484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.758 [2024-12-06 04:21:16.150058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:03.758 [2024-12-06 04:21:16.163714] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190f31b8 00:18:03.758 [2024-12-06 04:21:16.164584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:1417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.758 [2024-12-06 04:21:16.164620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:03.758 [2024-12-06 04:21:16.179070] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190f2d80 00:18:03.758 [2024-12-06 04:21:16.179927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:25380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.758 [2024-12-06 04:21:16.179962] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:03.758 [2024-12-06 04:21:16.193527] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190f2948 00:18:03.758 [2024-12-06 04:21:16.194348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:19699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.758 [2024-12-06 04:21:16.194580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:03.759 [2024-12-06 04:21:16.208185] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190f2510 00:18:03.759 [2024-12-06 04:21:16.209066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:3759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.759 [2024-12-06 04:21:16.209118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:03.759 [2024-12-06 04:21:16.223626] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190f20d8 00:18:03.759 [2024-12-06 04:21:16.224630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:8931 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.759 [2024-12-06 04:21:16.224659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:03.759 [2024-12-06 04:21:16.240086] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190f1ca0 00:18:03.759 [2024-12-06 04:21:16.240946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:5626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.759 [2024-12-06 04:21:16.240986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:03.759 [2024-12-06 04:21:16.256097] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190f1868 00:18:03.759 [2024-12-06 04:21:16.256923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:9390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.759 [2024-12-06 04:21:16.256952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:03.759 [2024-12-06 04:21:16.272050] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190f1430 00:18:03.759 [2024-12-06 04:21:16.272876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:5039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.759 [2024-12-06 04:21:16.272904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:03.759 [2024-12-06 04:21:16.288124] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190f0ff8 00:18:03.759 [2024-12-06 04:21:16.288923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:19810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.759 [2024-12-06 
04:21:16.288951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:03.759 [2024-12-06 04:21:16.304133] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190f0bc0 00:18:03.759 [2024-12-06 04:21:16.304930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:16807 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:03.759 [2024-12-06 04:21:16.304959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:03.759 [2024-12-06 04:21:16.320465] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190f0788 00:18:04.017 [2024-12-06 04:21:16.321318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:7793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.017 [2024-12-06 04:21:16.321368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:04.017 [2024-12-06 04:21:16.336669] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190f0350 00:18:04.017 [2024-12-06 04:21:16.337457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:25371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.017 [2024-12-06 04:21:16.337666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:04.017 [2024-12-06 04:21:16.351879] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190eff18 00:18:04.017 [2024-12-06 04:21:16.352872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:16228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.017 [2024-12-06 04:21:16.352901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:04.017 [2024-12-06 04:21:16.367227] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190efae0 00:18:04.017 [2024-12-06 04:21:16.368155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:11732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.017 [2024-12-06 04:21:16.368192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:04.017 [2024-12-06 04:21:16.383856] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190ef6a8 00:18:04.017 [2024-12-06 04:21:16.384608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:2189 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.017 [2024-12-06 04:21:16.384638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:04.017 [2024-12-06 04:21:16.399347] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190ef270 00:18:04.017 [2024-12-06 04:21:16.400080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:9814 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:18:04.017 [2024-12-06 04:21:16.400116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:04.017 [2024-12-06 04:21:16.414455] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190eee38 00:18:04.017 [2024-12-06 04:21:16.415204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22709 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.017 [2024-12-06 04:21:16.415238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:04.017 [2024-12-06 04:21:16.429672] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190eea00 00:18:04.017 [2024-12-06 04:21:16.430358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.017 [2024-12-06 04:21:16.430418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:04.017 [2024-12-06 04:21:16.445066] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190ee5c8 00:18:04.017 [2024-12-06 04:21:16.445929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:14795 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.017 [2024-12-06 04:21:16.445950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:04.017 [2024-12-06 04:21:16.460242] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190ee190 00:18:04.017 [2024-12-06 04:21:16.460949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.017 [2024-12-06 04:21:16.461107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:04.017 [2024-12-06 04:21:16.475366] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190edd58 00:18:04.017 [2024-12-06 04:21:16.476188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.017 [2024-12-06 04:21:16.476222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:04.017 [2024-12-06 04:21:16.490480] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190ed920 00:18:04.017 [2024-12-06 04:21:16.491163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:4353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.017 [2024-12-06 04:21:16.491341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:04.017 [2024-12-06 04:21:16.505228] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190ed4e8 00:18:04.017 [2024-12-06 04:21:16.506103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:1888 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:18:04.017 [2024-12-06 04:21:16.506294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:04.017 [2024-12-06 04:21:16.520968] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190ed0b0 00:18:04.017 [2024-12-06 04:21:16.521791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:2925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.017 [2024-12-06 04:21:16.522003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:04.017 [2024-12-06 04:21:16.536521] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190ecc78 00:18:04.017 [2024-12-06 04:21:16.537374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:6064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.017 [2024-12-06 04:21:16.537620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:04.017 [2024-12-06 04:21:16.551930] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190ec840 00:18:04.017 [2024-12-06 04:21:16.552750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:2053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.017 [2024-12-06 04:21:16.552960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:04.017 [2024-12-06 04:21:16.567057] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190ec408 00:18:04.017 [2024-12-06 04:21:16.567864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:2801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.017 [2024-12-06 04:21:16.568105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:04.276 [2024-12-06 04:21:16.582712] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190ebfd0 00:18:04.276 [2024-12-06 04:21:16.583471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:25531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.276 [2024-12-06 04:21:16.583684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:04.276 [2024-12-06 04:21:16.597814] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190ebb98 00:18:04.276 [2024-12-06 04:21:16.598604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:17956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.276 [2024-12-06 04:21:16.598634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:04.276 [2024-12-06 04:21:16.613068] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190eb760 00:18:04.276 [2024-12-06 04:21:16.613681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:15246 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.276 [2024-12-06 04:21:16.613709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:04.276 [2024-12-06 04:21:16.628045] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190eb328 00:18:04.276 [2024-12-06 04:21:16.628622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:5267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.276 [2024-12-06 04:21:16.628650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:04.276 [2024-12-06 04:21:16.643308] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190eaef0 00:18:04.276 [2024-12-06 04:21:16.643921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:9406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.276 [2024-12-06 04:21:16.643955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:04.276 [2024-12-06 04:21:16.658489] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190eaab8 00:18:04.276 [2024-12-06 04:21:16.659117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:4478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.276 [2024-12-06 04:21:16.659145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:04.276 [2024-12-06 04:21:16.673395] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190ea680 00:18:04.276 [2024-12-06 04:21:16.674003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:23544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.276 [2024-12-06 04:21:16.674037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:04.276 [2024-12-06 04:21:16.688501] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190ea248 00:18:04.276 [2024-12-06 04:21:16.689062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:1227 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.276 [2024-12-06 04:21:16.689096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:04.276 [2024-12-06 04:21:16.703785] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190e9e10 00:18:04.276 [2024-12-06 04:21:16.704330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:3976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.276 [2024-12-06 04:21:16.704357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:04.276 [2024-12-06 04:21:16.718532] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190e99d8 00:18:04.276 [2024-12-06 04:21:16.719254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:4795 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.276 [2024-12-06 04:21:16.719285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:04.276 [2024-12-06 04:21:16.733545] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190e95a0 00:18:04.276 [2024-12-06 04:21:16.734190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:1161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.276 [2024-12-06 04:21:16.734231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:04.276 [2024-12-06 04:21:16.748649] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190e9168 00:18:04.276 [2024-12-06 04:21:16.749153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:12945 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.276 [2024-12-06 04:21:16.749182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:04.276 [2024-12-06 04:21:16.763470] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190e8d30 00:18:04.276 [2024-12-06 04:21:16.764162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:25368 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.276 [2024-12-06 04:21:16.764193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:04.276 [2024-12-06 04:21:16.778371] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190e88f8 00:18:04.276 [2024-12-06 04:21:16.778920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:18506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.276 [2024-12-06 04:21:16.779090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:04.276 [2024-12-06 04:21:16.793214] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190e84c0 00:18:04.276 [2024-12-06 04:21:16.793716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:13692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.276 [2024-12-06 04:21:16.793745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:04.276 [2024-12-06 04:21:16.808523] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190e8088 00:18:04.276 [2024-12-06 04:21:16.809018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:1352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.276 [2024-12-06 04:21:16.809045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:04.276 [2024-12-06 04:21:16.823728] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190e7c50 00:18:04.276 [2024-12-06 04:21:16.824175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 
nsid:1 lba:5729 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.276 [2024-12-06 04:21:16.824202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:04.535 [2024-12-06 04:21:16.839153] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190e7818 00:18:04.535 [2024-12-06 04:21:16.839761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:7176 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.535 [2024-12-06 04:21:16.839795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:04.535 [2024-12-06 04:21:16.855573] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190e73e0 00:18:04.535 [2024-12-06 04:21:16.856038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:14844 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.535 [2024-12-06 04:21:16.856065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:04.535 [2024-12-06 04:21:16.871376] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190e6fa8 00:18:04.535 [2024-12-06 04:21:16.871821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:3783 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.535 [2024-12-06 04:21:16.871855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:04.536 [2024-12-06 04:21:16.887142] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190e6b70 00:18:04.536 [2024-12-06 04:21:16.887562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:16964 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.536 [2024-12-06 04:21:16.887589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:04.536 [2024-12-06 04:21:16.902084] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190e6738 00:18:04.536 [2024-12-06 04:21:16.902510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:321 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.536 [2024-12-06 04:21:16.902538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:04.536 [2024-12-06 04:21:16.917484] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190e6300 00:18:04.536 [2024-12-06 04:21:16.917909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.536 [2024-12-06 04:21:16.917953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:04.536 [2024-12-06 04:21:16.932349] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190e5ec8 00:18:04.536 [2024-12-06 04:21:16.932768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:19795 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.536 [2024-12-06 04:21:16.932810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:04.536 [2024-12-06 04:21:16.947329] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190e5a90 00:18:04.536 [2024-12-06 04:21:16.947748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:11155 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.536 [2024-12-06 04:21:16.947783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:04.536 [2024-12-06 04:21:16.962489] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190e5658 00:18:04.536 [2024-12-06 04:21:16.962880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:4788 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.536 [2024-12-06 04:21:16.962911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:04.536 [2024-12-06 04:21:16.977359] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190e5220 00:18:04.536 [2024-12-06 04:21:16.977770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:16627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.536 [2024-12-06 04:21:16.977798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:04.536 [2024-12-06 04:21:16.992176] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190e4de8 00:18:04.536 [2024-12-06 04:21:16.992500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:14754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.536 [2024-12-06 04:21:16.992541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:04.536 [2024-12-06 04:21:17.006863] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190e49b0 00:18:04.536 [2024-12-06 04:21:17.007236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:19728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.536 [2024-12-06 04:21:17.007263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:04.536 [2024-12-06 04:21:17.022474] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190e4578 00:18:04.536 [2024-12-06 04:21:17.022828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:10384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.536 [2024-12-06 04:21:17.022856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:04.536 [2024-12-06 04:21:17.037849] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190e4140 00:18:04.536 [2024-12-06 04:21:17.038155] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:19054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.536 [2024-12-06 04:21:17.038212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:04.536 [2024-12-06 04:21:17.052659] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190e3d08 00:18:04.536 [2024-12-06 04:21:17.052950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:20696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.536 [2024-12-06 04:21:17.052975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:04.536 [2024-12-06 04:21:17.068694] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190e38d0 00:18:04.536 [2024-12-06 04:21:17.068967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:19821 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.536 [2024-12-06 04:21:17.069024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:04.536 [2024-12-06 04:21:17.084231] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190e3498 00:18:04.536 [2024-12-06 04:21:17.084536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:21103 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.536 [2024-12-06 04:21:17.084579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:04.796 [2024-12-06 04:21:17.099955] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190e3060 00:18:04.796 [2024-12-06 04:21:17.100202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:24955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.796 [2024-12-06 04:21:17.100223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:04.796 [2024-12-06 04:21:17.115108] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190e2c28 00:18:04.796 [2024-12-06 04:21:17.115553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:5773 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.796 [2024-12-06 04:21:17.115576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:04.796 [2024-12-06 04:21:17.130682] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190e27f0 00:18:04.796 [2024-12-06 04:21:17.130909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:6473 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.796 [2024-12-06 04:21:17.130951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:04.796 [2024-12-06 04:21:17.145638] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190e23b8 00:18:04.796 [2024-12-06 04:21:17.145857] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24928 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.796 [2024-12-06 04:21:17.145883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:04.796 [2024-12-06 04:21:17.160666] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190e1f80 00:18:04.796 [2024-12-06 04:21:17.161053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.796 [2024-12-06 04:21:17.161075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:04.796 [2024-12-06 04:21:17.176427] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190e1b48 00:18:04.796 [2024-12-06 04:21:17.176622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:17091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.796 [2024-12-06 04:21:17.176649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:04.796 [2024-12-06 04:21:17.191672] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190e1710 00:18:04.796 [2024-12-06 04:21:17.191885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:24886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.796 [2024-12-06 04:21:17.191911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:04.796 [2024-12-06 04:21:17.206995] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190e12d8 00:18:04.796 [2024-12-06 04:21:17.207335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:2530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.796 [2024-12-06 04:21:17.207358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:04.796 [2024-12-06 04:21:17.222819] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190e0ea0 00:18:04.796 [2024-12-06 04:21:17.223045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:10911 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.796 [2024-12-06 04:21:17.223066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:04.796 [2024-12-06 04:21:17.238797] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190e0a68 00:18:04.796 [2024-12-06 04:21:17.238998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:7154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.796 [2024-12-06 04:21:17.239036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:04.797 [2024-12-06 04:21:17.253692] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190e0630 00:18:04.797 [2024-12-06 04:21:17.253838] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:2999 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.797 [2024-12-06 04:21:17.253858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:04.797 [2024-12-06 04:21:17.268368] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190e01f8 00:18:04.797 [2024-12-06 04:21:17.268559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1505 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.797 [2024-12-06 04:21:17.268597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:04.797 [2024-12-06 04:21:17.283054] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190dfdc0 00:18:04.797 [2024-12-06 04:21:17.283368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:9017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.797 [2024-12-06 04:21:17.283391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:04.797 [2024-12-06 04:21:17.297685] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190df988 00:18:04.797 [2024-12-06 04:21:17.297965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:14592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.797 [2024-12-06 04:21:17.297986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:04.797 [2024-12-06 04:21:17.312318] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190df550 00:18:04.797 [2024-12-06 04:21:17.312467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:11070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.797 [2024-12-06 04:21:17.312488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:04.797 [2024-12-06 04:21:17.327728] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190df118 00:18:04.797 [2024-12-06 04:21:17.327833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.797 [2024-12-06 04:21:17.327856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:04.797 [2024-12-06 04:21:17.343860] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190dece0 00:18:04.797 [2024-12-06 04:21:17.343986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:8811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:04.797 [2024-12-06 04:21:17.344022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:05.056 [2024-12-06 04:21:17.359726] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190de8a8 00:18:05.056 [2024-12-06 
04:21:17.359825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9025 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.056 [2024-12-06 04:21:17.359848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:05.056 [2024-12-06 04:21:17.374417] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190de038 00:18:05.056 [2024-12-06 04:21:17.374503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13354 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.056 [2024-12-06 04:21:17.374524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:05.056 [2024-12-06 04:21:17.394527] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190de038 00:18:05.056 [2024-12-06 04:21:17.395863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.056 [2024-12-06 04:21:17.395896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.056 [2024-12-06 04:21:17.409045] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190de470 00:18:05.056 [2024-12-06 04:21:17.410623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:8143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:05.056 [2024-12-06 04:21:17.410655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:05.056 00:18:05.056 Latency(us) 00:18:05.056 [2024-12-06T04:21:17.621Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:05.056 [2024-12-06T04:21:17.621Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:05.056 nvme0n1 : 2.00 16540.82 64.61 0.00 0.00 7732.44 6613.18 21090.68 00:18:05.056 [2024-12-06T04:21:17.621Z] =================================================================================================================== 00:18:05.056 [2024-12-06T04:21:17.621Z] Total : 16540.82 64.61 0.00 0.00 7732.44 6613.18 21090.68 00:18:05.056 0 00:18:05.056 04:21:17 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:18:05.056 04:21:17 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:18:05.056 04:21:17 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:18:05.056 | .driver_specific 00:18:05.056 | .nvme_error 00:18:05.056 | .status_code 00:18:05.056 | .command_transient_transport_error' 00:18:05.056 04:21:17 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:18:05.315 04:21:17 -- host/digest.sh@71 -- # (( 129 > 0 )) 00:18:05.315 04:21:17 -- host/digest.sh@73 -- # killprocess 84438 00:18:05.315 04:21:17 -- common/autotest_common.sh@936 -- # '[' -z 84438 ']' 00:18:05.315 04:21:17 -- common/autotest_common.sh@940 -- # kill -0 84438 00:18:05.315 04:21:17 -- common/autotest_common.sh@941 -- # uname 00:18:05.315 04:21:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:05.315 04:21:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84438 00:18:05.315 killing process with pid 84438 
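For reference, the transient-error readback traced just above can be repeated by hand. This is a minimal sketch, assuming the same bperf RPC socket and bdev name used in this run, with the jq path copied from the trace:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Ask bdevperf for per-bdev I/O statistics, including the NVMe error counters
  # enabled earlier with --nvme-error-stat.
  errcount=$($rpc -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  # The check passes when at least one injected digest error surfaced as a
  # transient transport error; this run counted 129 of them.
  (( errcount > 0 )) && echo "transient transport errors: $errcount"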
00:18:05.315 Received shutdown signal, test time was about 2.000000 seconds 00:18:05.315 00:18:05.315 Latency(us) 00:18:05.315 [2024-12-06T04:21:17.880Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:05.315 [2024-12-06T04:21:17.880Z] =================================================================================================================== 00:18:05.315 [2024-12-06T04:21:17.880Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:05.315 04:21:17 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:05.315 04:21:17 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:05.315 04:21:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84438' 00:18:05.315 04:21:17 -- common/autotest_common.sh@955 -- # kill 84438 00:18:05.315 04:21:17 -- common/autotest_common.sh@960 -- # wait 84438 00:18:05.574 04:21:17 -- host/digest.sh@114 -- # run_bperf_err randwrite 131072 16 00:18:05.574 04:21:17 -- host/digest.sh@54 -- # local rw bs qd 00:18:05.574 04:21:17 -- host/digest.sh@56 -- # rw=randwrite 00:18:05.574 04:21:17 -- host/digest.sh@56 -- # bs=131072 00:18:05.574 04:21:17 -- host/digest.sh@56 -- # qd=16 00:18:05.574 04:21:17 -- host/digest.sh@58 -- # bperfpid=84500 00:18:05.574 04:21:17 -- host/digest.sh@60 -- # waitforlisten 84500 /var/tmp/bperf.sock 00:18:05.574 04:21:17 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:18:05.574 04:21:17 -- common/autotest_common.sh@829 -- # '[' -z 84500 ']' 00:18:05.574 04:21:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:05.574 04:21:17 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:05.574 04:21:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:05.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:05.574 04:21:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:05.574 04:21:17 -- common/autotest_common.sh@10 -- # set +x 00:18:05.574 [2024-12-06 04:21:17.996967] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:05.574 [2024-12-06 04:21:17.997278] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84500 ] 00:18:05.574 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:05.574 Zero copy mechanism will not be used. 
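The follow-on run_bperf_err pass uses the same launch pattern; a condensed sketch of that step with the parameters shown in the trace (the socket-wait loop below is a simplified stand-in for the waitforlisten helper):

  # Start bdevperf idle (-z): 128 KiB (131072-byte) random writes, queue depth 16, 2 s runtime.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &
  bperfpid=$!
  # Simplified stand-in for waitforlisten: block until the RPC socket appears.
  while [ ! -S /var/tmp/bperf.sock ]; do sleep 0.1; done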
00:18:05.574 [2024-12-06 04:21:18.136471] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:05.833 [2024-12-06 04:21:18.220846] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:06.768 04:21:18 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:06.768 04:21:18 -- common/autotest_common.sh@862 -- # return 0 00:18:06.768 04:21:18 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:06.768 04:21:18 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:06.768 04:21:19 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:18:06.768 04:21:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.768 04:21:19 -- common/autotest_common.sh@10 -- # set +x 00:18:06.768 04:21:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.768 04:21:19 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:06.768 04:21:19 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:07.335 nvme0n1 00:18:07.335 04:21:19 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:18:07.335 04:21:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.335 04:21:19 -- common/autotest_common.sh@10 -- # set +x 00:18:07.335 04:21:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.335 04:21:19 -- host/digest.sh@69 -- # bperf_py perform_tests 00:18:07.335 04:21:19 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:07.336 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:07.336 Zero copy mechanism will not be used. 00:18:07.336 Running I/O for 2 seconds... 
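With bdevperf listening, the trace then configures the initiator and arms the digest-error injection before starting I/O; a sketch of that RPC sequence, with every flag taken from the commands traced above:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
  # Record per-NVMe-error statistics and set the bdev retry count (-1, as traced).
  $rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Clear any previously configured CRC32C error injection.
  $rpc accel_error_inject_error -o crc32c -t disable
  # Attach the NVMe-oF TCP controller with data digest (--ddgst) enabled.
  $rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Re-arm the injection so CRC32C results are corrupted (-i 32, as traced).
  $rpc accel_error_inject_error -o crc32c -t corrupt -i 32
  # Kick off the configured workload; each corrupted digest shows up as one of the
  # "Data digest error" / TRANSIENT TRANSPORT ERROR pairs logged below.
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bperf.sock perform_tests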
00:18:07.336 [2024-12-06 04:21:19.743226] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.336 [2024-12-06 04:21:19.743582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.336 [2024-12-06 04:21:19.743620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:07.336 [2024-12-06 04:21:19.748334] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.336 [2024-12-06 04:21:19.749765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.336 [2024-12-06 04:21:19.749789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:07.336 [2024-12-06 04:21:19.754474] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.336 [2024-12-06 04:21:19.754859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.336 [2024-12-06 04:21:19.754890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:07.336 [2024-12-06 04:21:19.760007] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.336 [2024-12-06 04:21:19.760318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.336 [2024-12-06 04:21:19.760345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:07.336 [2024-12-06 04:21:19.765263] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.336 [2024-12-06 04:21:19.765743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.336 [2024-12-06 04:21:19.765781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:07.336 [2024-12-06 04:21:19.770575] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.336 [2024-12-06 04:21:19.771073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.336 [2024-12-06 04:21:19.771276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:07.336 [2024-12-06 04:21:19.776198] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.336 [2024-12-06 04:21:19.776663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.336 [2024-12-06 04:21:19.776825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:07.336 [2024-12-06 04:21:19.781632] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.336 [2024-12-06 04:21:19.782104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.336 [2024-12-06 04:21:19.782258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:07.336 [2024-12-06 04:21:19.787096] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.336 [2024-12-06 04:21:19.787548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.336 [2024-12-06 04:21:19.787777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:07.336 [2024-12-06 04:21:19.792750] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.336 [2024-12-06 04:21:19.793197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.336 [2024-12-06 04:21:19.793372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:07.336 [2024-12-06 04:21:19.798272] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.336 [2024-12-06 04:21:19.798798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.336 [2024-12-06 04:21:19.799019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:07.336 [2024-12-06 04:21:19.803880] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.336 [2024-12-06 04:21:19.804364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.336 [2024-12-06 04:21:19.804548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:07.336 [2024-12-06 04:21:19.809463] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.336 [2024-12-06 04:21:19.809925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.336 [2024-12-06 04:21:19.810075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:07.336 [2024-12-06 04:21:19.814770] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.336 [2024-12-06 04:21:19.815093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.336 [2024-12-06 04:21:19.815127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:07.336 [2024-12-06 04:21:19.819912] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.336 [2024-12-06 04:21:19.820209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.336 [2024-12-06 04:21:19.820235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:07.336 [2024-12-06 04:21:19.824857] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.336 [2024-12-06 04:21:19.825139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.336 [2024-12-06 04:21:19.825197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:07.336 [2024-12-06 04:21:19.829877] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.336 [2024-12-06 04:21:19.830159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.336 [2024-12-06 04:21:19.830185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:07.336 [2024-12-06 04:21:19.834926] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.336 [2024-12-06 04:21:19.835252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.336 [2024-12-06 04:21:19.835278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:07.336 [2024-12-06 04:21:19.840201] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.336 [2024-12-06 04:21:19.840506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.336 [2024-12-06 04:21:19.840543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:07.336 [2024-12-06 04:21:19.845828] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.336 [2024-12-06 04:21:19.846161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.336 [2024-12-06 04:21:19.846188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:07.336 [2024-12-06 04:21:19.851196] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.336 [2024-12-06 04:21:19.851500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.336 [2024-12-06 04:21:19.851581] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:07.336 [2024-12-06 04:21:19.856552] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.336 [2024-12-06 04:21:19.856853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.336 [2024-12-06 04:21:19.856881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:07.336 [2024-12-06 04:21:19.862072] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.336 [2024-12-06 04:21:19.862356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.336 [2024-12-06 04:21:19.862395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:07.336 [2024-12-06 04:21:19.867597] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.336 [2024-12-06 04:21:19.867927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.336 [2024-12-06 04:21:19.867957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:07.336 [2024-12-06 04:21:19.872978] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.336 [2024-12-06 04:21:19.873473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.336 [2024-12-06 04:21:19.873497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:07.336 [2024-12-06 04:21:19.878598] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.336 [2024-12-06 04:21:19.878906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.337 [2024-12-06 04:21:19.878935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:07.337 [2024-12-06 04:21:19.884132] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.337 [2024-12-06 04:21:19.884417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.337 [2024-12-06 04:21:19.884453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:07.337 [2024-12-06 04:21:19.889705] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.337 [2024-12-06 04:21:19.890016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.337 [2024-12-06 
04:21:19.890075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:07.337 [2024-12-06 04:21:19.895183] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.337 [2024-12-06 04:21:19.895555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.337 [2024-12-06 04:21:19.895583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:07.596 [2024-12-06 04:21:19.900745] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.596 [2024-12-06 04:21:19.901051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.596 [2024-12-06 04:21:19.901111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:07.596 [2024-12-06 04:21:19.906300] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.596 [2024-12-06 04:21:19.906627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.596 [2024-12-06 04:21:19.906657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:07.596 [2024-12-06 04:21:19.911904] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.596 [2024-12-06 04:21:19.912348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.596 [2024-12-06 04:21:19.912372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:07.596 [2024-12-06 04:21:19.917615] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.596 [2024-12-06 04:21:19.917921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.596 [2024-12-06 04:21:19.917951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:07.596 [2024-12-06 04:21:19.923114] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.596 [2024-12-06 04:21:19.923426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.596 [2024-12-06 04:21:19.923478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:07.596 [2024-12-06 04:21:19.928631] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.596 [2024-12-06 04:21:19.928943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:18:07.596 [2024-12-06 04:21:19.928972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:07.596 [2024-12-06 04:21:19.934218] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.596 [2024-12-06 04:21:19.934581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.596 [2024-12-06 04:21:19.934609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:07.596 [2024-12-06 04:21:19.939806] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.596 [2024-12-06 04:21:19.940169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.596 [2024-12-06 04:21:19.940196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:07.596 [2024-12-06 04:21:19.945382] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.596 [2024-12-06 04:21:19.945816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.597 [2024-12-06 04:21:19.945851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:07.597 [2024-12-06 04:21:19.950748] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.597 [2024-12-06 04:21:19.951095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.597 [2024-12-06 04:21:19.951121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:07.597 [2024-12-06 04:21:19.955830] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.597 [2024-12-06 04:21:19.956113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.597 [2024-12-06 04:21:19.956140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:07.597 [2024-12-06 04:21:19.961006] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.597 [2024-12-06 04:21:19.961303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.597 [2024-12-06 04:21:19.961331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:07.597 [2024-12-06 04:21:19.966023] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.597 [2024-12-06 04:21:19.966294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.597 [2024-12-06 04:21:19.966352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:07.597 [2024-12-06 04:21:19.971136] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.597 [2024-12-06 04:21:19.971613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.597 [2024-12-06 04:21:19.971650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:07.597 [2024-12-06 04:21:19.976436] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.597 [2024-12-06 04:21:19.976738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.597 [2024-12-06 04:21:19.976765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:07.597 [2024-12-06 04:21:19.981671] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.597 [2024-12-06 04:21:19.981960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.597 [2024-12-06 04:21:19.981989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:07.597 [2024-12-06 04:21:19.986834] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.597 [2024-12-06 04:21:19.987139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.597 [2024-12-06 04:21:19.987170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:07.597 [2024-12-06 04:21:19.992131] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.597 [2024-12-06 04:21:19.992414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.597 [2024-12-06 04:21:19.992450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:07.597 [2024-12-06 04:21:19.997332] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.597 [2024-12-06 04:21:19.997710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.597 [2024-12-06 04:21:19.997758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:07.597 [2024-12-06 04:21:20.002830] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.597 [2024-12-06 04:21:20.003154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.597 [2024-12-06 04:21:20.003181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:07.597 [2024-12-06 04:21:20.008007] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.597 [2024-12-06 04:21:20.008290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.597 [2024-12-06 04:21:20.008316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:07.597 [2024-12-06 04:21:20.013426] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.597 [2024-12-06 04:21:20.013785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.597 [2024-12-06 04:21:20.013844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:07.597 [2024-12-06 04:21:20.019194] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.597 [2024-12-06 04:21:20.019721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.597 [2024-12-06 04:21:20.019745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:07.597 [2024-12-06 04:21:20.024607] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.597 [2024-12-06 04:21:20.024887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.597 [2024-12-06 04:21:20.024912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:07.597 [2024-12-06 04:21:20.029522] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.597 [2024-12-06 04:21:20.029821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.597 [2024-12-06 04:21:20.029847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:07.597 [2024-12-06 04:21:20.034439] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.597 [2024-12-06 04:21:20.034804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.597 [2024-12-06 04:21:20.034832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:07.597 [2024-12-06 04:21:20.039626] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.597 [2024-12-06 04:21:20.039902] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.597 [2024-12-06 04:21:20.039928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:07.597 [2024-12-06 04:21:20.044542] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.597 [2024-12-06 04:21:20.044819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.597 [2024-12-06 04:21:20.044845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:07.597 [2024-12-06 04:21:20.049448] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.597 [2024-12-06 04:21:20.049742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.597 [2024-12-06 04:21:20.049768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:07.597 [2024-12-06 04:21:20.054475] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.597 [2024-12-06 04:21:20.054801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.597 [2024-12-06 04:21:20.054830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:07.597 [2024-12-06 04:21:20.059534] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.597 [2024-12-06 04:21:20.059810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.597 [2024-12-06 04:21:20.059835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:07.597 [2024-12-06 04:21:20.064348] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.597 [2024-12-06 04:21:20.064665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.597 [2024-12-06 04:21:20.064692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:07.597 [2024-12-06 04:21:20.069276] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.597 [2024-12-06 04:21:20.069564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.597 [2024-12-06 04:21:20.069590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:07.597 [2024-12-06 04:21:20.074165] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.597 
[2024-12-06 04:21:20.074472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.597 [2024-12-06 04:21:20.074497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:07.597 [2024-12-06 04:21:20.079188] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.597 [2024-12-06 04:21:20.079666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.597 [2024-12-06 04:21:20.079702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:07.597 [2024-12-06 04:21:20.084267] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.598 [2024-12-06 04:21:20.084578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.598 [2024-12-06 04:21:20.084604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:07.598 [2024-12-06 04:21:20.089139] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.598 [2024-12-06 04:21:20.089446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.598 [2024-12-06 04:21:20.089472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:07.598 [2024-12-06 04:21:20.093993] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.598 [2024-12-06 04:21:20.094284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.598 [2024-12-06 04:21:20.094311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:07.598 [2024-12-06 04:21:20.099194] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.598 [2024-12-06 04:21:20.099694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.598 [2024-12-06 04:21:20.099731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:07.598 [2024-12-06 04:21:20.104923] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.598 [2024-12-06 04:21:20.105208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.598 [2024-12-06 04:21:20.105234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:07.598 [2024-12-06 04:21:20.109882] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with 
pdu=0x2000190fef90 00:18:07.598 [2024-12-06 04:21:20.110177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.598 [2024-12-06 04:21:20.110203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:07.598 [2024-12-06 04:21:20.114863] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.598 [2024-12-06 04:21:20.115191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.598 [2024-12-06 04:21:20.115217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:07.598 [2024-12-06 04:21:20.119828] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.598 [2024-12-06 04:21:20.120104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.598 [2024-12-06 04:21:20.120130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:07.598 [2024-12-06 04:21:20.124758] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.598 [2024-12-06 04:21:20.125034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.598 [2024-12-06 04:21:20.125060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:07.598 [2024-12-06 04:21:20.129748] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.598 [2024-12-06 04:21:20.130029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.598 [2024-12-06 04:21:20.130056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:07.598 [2024-12-06 04:21:20.134737] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.598 [2024-12-06 04:21:20.135046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.598 [2024-12-06 04:21:20.135073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:07.598 [2024-12-06 04:21:20.139777] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.598 [2024-12-06 04:21:20.140052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.598 [2024-12-06 04:21:20.140077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:07.598 [2024-12-06 04:21:20.144693] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.598 [2024-12-06 04:21:20.144970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.598 [2024-12-06 04:21:20.144995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:07.598 [2024-12-06 04:21:20.149565] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.598 [2024-12-06 04:21:20.149862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.598 [2024-12-06 04:21:20.149888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:07.598 [2024-12-06 04:21:20.154459] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.598 [2024-12-06 04:21:20.154958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.598 [2024-12-06 04:21:20.154996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:07.859 [2024-12-06 04:21:20.160081] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.859 [2024-12-06 04:21:20.160377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.859 [2024-12-06 04:21:20.160412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:07.859 [2024-12-06 04:21:20.165460] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.859 [2024-12-06 04:21:20.165764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.859 [2024-12-06 04:21:20.165791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:07.859 [2024-12-06 04:21:20.170651] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.859 [2024-12-06 04:21:20.171021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.859 [2024-12-06 04:21:20.171047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:07.859 [2024-12-06 04:21:20.175959] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.859 [2024-12-06 04:21:20.176221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.859 [2024-12-06 04:21:20.176246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:07.859 [2024-12-06 04:21:20.181025] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.859 [2024-12-06 04:21:20.181298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.859 [2024-12-06 04:21:20.181324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:07.859 [2024-12-06 04:21:20.185970] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.859 [2024-12-06 04:21:20.186387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.859 [2024-12-06 04:21:20.186424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:07.859 [2024-12-06 04:21:20.191230] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.859 [2024-12-06 04:21:20.191524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.859 [2024-12-06 04:21:20.191550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:07.859 [2024-12-06 04:21:20.196149] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.859 [2024-12-06 04:21:20.196439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.859 [2024-12-06 04:21:20.196465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:07.859 [2024-12-06 04:21:20.201159] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.859 [2024-12-06 04:21:20.201485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.859 [2024-12-06 04:21:20.201512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:07.859 [2024-12-06 04:21:20.206170] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.859 [2024-12-06 04:21:20.206669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.859 [2024-12-06 04:21:20.206693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:07.859 [2024-12-06 04:21:20.211726] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.859 [2024-12-06 04:21:20.212090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.859 [2024-12-06 04:21:20.212117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:18:07.859 [2024-12-06 04:21:20.217293] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.859 [2024-12-06 04:21:20.217616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.859 [2024-12-06 04:21:20.217643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:07.859 [2024-12-06 04:21:20.222677] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.859 [2024-12-06 04:21:20.222996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.859 [2024-12-06 04:21:20.223053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:07.859 [2024-12-06 04:21:20.227953] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.859 [2024-12-06 04:21:20.228245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.859 [2024-12-06 04:21:20.228271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:07.859 [2024-12-06 04:21:20.233021] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.859 [2024-12-06 04:21:20.233300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.859 [2024-12-06 04:21:20.233327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:07.859 [2024-12-06 04:21:20.238081] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.859 [2024-12-06 04:21:20.238541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.859 [2024-12-06 04:21:20.238589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:07.859 [2024-12-06 04:21:20.243240] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.859 [2024-12-06 04:21:20.243575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.859 [2024-12-06 04:21:20.243606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:07.859 [2024-12-06 04:21:20.248191] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.859 [2024-12-06 04:21:20.248466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.859 [2024-12-06 04:21:20.248492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:07.859 [2024-12-06 04:21:20.253205] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.859 [2024-12-06 04:21:20.253529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.859 [2024-12-06 04:21:20.253560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:07.859 [2024-12-06 04:21:20.258275] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.859 [2024-12-06 04:21:20.258779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.859 [2024-12-06 04:21:20.258816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:07.859 [2024-12-06 04:21:20.263465] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.859 [2024-12-06 04:21:20.263779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.859 [2024-12-06 04:21:20.263805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:07.859 [2024-12-06 04:21:20.268637] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.859 [2024-12-06 04:21:20.268920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.859 [2024-12-06 04:21:20.268961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:07.859 [2024-12-06 04:21:20.273850] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.859 [2024-12-06 04:21:20.274171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.859 [2024-12-06 04:21:20.274199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:07.859 [2024-12-06 04:21:20.279403] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.860 [2024-12-06 04:21:20.279750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.860 [2024-12-06 04:21:20.279776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:07.860 [2024-12-06 04:21:20.284609] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.860 [2024-12-06 04:21:20.284897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.860 [2024-12-06 04:21:20.284923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:07.860 [2024-12-06 04:21:20.289584] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.860 [2024-12-06 04:21:20.289869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.860 [2024-12-06 04:21:20.289894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:07.860 [2024-12-06 04:21:20.294653] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.860 [2024-12-06 04:21:20.295025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.860 [2024-12-06 04:21:20.295052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:07.860 [2024-12-06 04:21:20.299742] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.860 [2024-12-06 04:21:20.300019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.860 [2024-12-06 04:21:20.300044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:07.860 [2024-12-06 04:21:20.304690] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.860 [2024-12-06 04:21:20.304951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.860 [2024-12-06 04:21:20.304976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:07.860 [2024-12-06 04:21:20.309518] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.860 [2024-12-06 04:21:20.309795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.860 [2024-12-06 04:21:20.309821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:07.860 [2024-12-06 04:21:20.314309] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.860 [2024-12-06 04:21:20.314831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.860 [2024-12-06 04:21:20.314855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:07.860 [2024-12-06 04:21:20.319462] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.860 [2024-12-06 04:21:20.319784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.860 [2024-12-06 04:21:20.319810] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:07.860 [2024-12-06 04:21:20.324406] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.860 [2024-12-06 04:21:20.324700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.860 [2024-12-06 04:21:20.324726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:07.860 [2024-12-06 04:21:20.329319] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.860 [2024-12-06 04:21:20.329648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.860 [2024-12-06 04:21:20.329674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:07.860 [2024-12-06 04:21:20.334097] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.860 [2024-12-06 04:21:20.334587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.860 [2024-12-06 04:21:20.334612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:07.860 [2024-12-06 04:21:20.339324] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.860 [2024-12-06 04:21:20.339633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.860 [2024-12-06 04:21:20.339660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:07.860 [2024-12-06 04:21:20.344356] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.860 [2024-12-06 04:21:20.344681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.860 [2024-12-06 04:21:20.344731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:07.860 [2024-12-06 04:21:20.349287] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.860 [2024-12-06 04:21:20.349622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.860 [2024-12-06 04:21:20.349654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:07.860 [2024-12-06 04:21:20.354215] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.860 [2024-12-06 04:21:20.354733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.860 
[2024-12-06 04:21:20.354757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:07.860 [2024-12-06 04:21:20.359865] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.860 [2024-12-06 04:21:20.360169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.860 [2024-12-06 04:21:20.360196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:07.860 [2024-12-06 04:21:20.365267] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.860 [2024-12-06 04:21:20.365601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.860 [2024-12-06 04:21:20.365634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:07.860 [2024-12-06 04:21:20.370255] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.860 [2024-12-06 04:21:20.370739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.860 [2024-12-06 04:21:20.370762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:07.860 [2024-12-06 04:21:20.375433] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.860 [2024-12-06 04:21:20.375781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.860 [2024-12-06 04:21:20.375829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:07.860 [2024-12-06 04:21:20.380582] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.860 [2024-12-06 04:21:20.380881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.860 [2024-12-06 04:21:20.380926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:07.860 [2024-12-06 04:21:20.385857] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.860 [2024-12-06 04:21:20.386194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.860 [2024-12-06 04:21:20.386223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:07.860 [2024-12-06 04:21:20.391323] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:07.860 [2024-12-06 04:21:20.391625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:18:07.860 [2024-12-06 04:21:20.391652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:18:07.860 - 00:18:08.647 [2024-12-06 04:21:20.396613 through 04:21:21.101871] (condensed: repeated data digest error injection output) For each WRITE in a long run of commands (sqid:1 cid:15 nsid:1, len:32, LBAs varying between 32 and 25376), tcp.c:2036:data_crc32_calc_done logged *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90, nvme_qpair.c: 243:nvme_io_qpair_print_command printed the WRITE command (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), and nvme_qpair.c: 474:spdk_nvme_print_completion printed COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 p:0 m:0 dnr:0, with sqhd cycling through 0001/0021/0041/0061.
00:18:08.647 [2024-12-06 04:21:21.106740] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90
00:18:08.647 [2024-12-06 04:21:21.107074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:08.647 [2024-12-06 04:21:21.107132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15
cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:08.647 [2024-12-06 04:21:21.111981] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:08.647 [2024-12-06 04:21:21.112255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.647 [2024-12-06 04:21:21.112282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.647 [2024-12-06 04:21:21.117045] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:08.647 [2024-12-06 04:21:21.117333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.647 [2024-12-06 04:21:21.117362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:08.647 [2024-12-06 04:21:21.122166] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:08.647 [2024-12-06 04:21:21.122465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.647 [2024-12-06 04:21:21.122493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:08.647 [2024-12-06 04:21:21.127569] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:08.647 [2024-12-06 04:21:21.127840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.647 [2024-12-06 04:21:21.127867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:08.647 [2024-12-06 04:21:21.132784] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:08.647 [2024-12-06 04:21:21.133094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.647 [2024-12-06 04:21:21.133122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.647 [2024-12-06 04:21:21.138375] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:08.647 [2024-12-06 04:21:21.138744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.647 [2024-12-06 04:21:21.138772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:08.647 [2024-12-06 04:21:21.143842] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:08.647 [2024-12-06 04:21:21.144130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.647 [2024-12-06 04:21:21.144158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:08.647 [2024-12-06 04:21:21.149209] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:08.647 [2024-12-06 04:21:21.149556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.647 [2024-12-06 04:21:21.149580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:08.647 [2024-12-06 04:21:21.154367] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:08.647 [2024-12-06 04:21:21.154702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.647 [2024-12-06 04:21:21.154731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.647 [2024-12-06 04:21:21.159683] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:08.647 [2024-12-06 04:21:21.159971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.647 [2024-12-06 04:21:21.159998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:08.647 [2024-12-06 04:21:21.164876] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:08.647 [2024-12-06 04:21:21.165218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.647 [2024-12-06 04:21:21.165244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:08.647 [2024-12-06 04:21:21.170158] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:08.647 [2024-12-06 04:21:21.170472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.647 [2024-12-06 04:21:21.170499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:08.647 [2024-12-06 04:21:21.175176] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:08.647 [2024-12-06 04:21:21.175468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.647 [2024-12-06 04:21:21.175494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.647 [2024-12-06 04:21:21.180258] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:08.647 [2024-12-06 04:21:21.180549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.647 [2024-12-06 04:21:21.180576] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:08.647 [2024-12-06 04:21:21.185312] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:08.647 [2024-12-06 04:21:21.185625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.647 [2024-12-06 04:21:21.185652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:08.647 [2024-12-06 04:21:21.190343] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:08.647 [2024-12-06 04:21:21.190700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.647 [2024-12-06 04:21:21.190728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:08.647 [2024-12-06 04:21:21.195440] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:08.647 [2024-12-06 04:21:21.195735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.647 [2024-12-06 04:21:21.195761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.647 [2024-12-06 04:21:21.200524] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:08.647 [2024-12-06 04:21:21.200824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.647 [2024-12-06 04:21:21.200851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:08.647 [2024-12-06 04:21:21.205947] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:08.647 [2024-12-06 04:21:21.206269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.647 [2024-12-06 04:21:21.206296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:08.908 [2024-12-06 04:21:21.211257] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:08.908 [2024-12-06 04:21:21.211576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.908 [2024-12-06 04:21:21.211602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:08.908 [2024-12-06 04:21:21.216501] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:08.908 [2024-12-06 04:21:21.216817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.908 
[2024-12-06 04:21:21.216844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.908 [2024-12-06 04:21:21.221638] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:08.908 [2024-12-06 04:21:21.221923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.908 [2024-12-06 04:21:21.221949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:08.908 [2024-12-06 04:21:21.226636] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:08.908 [2024-12-06 04:21:21.226979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.908 [2024-12-06 04:21:21.227021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:08.908 [2024-12-06 04:21:21.231792] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:08.908 [2024-12-06 04:21:21.232052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.908 [2024-12-06 04:21:21.232108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:08.908 [2024-12-06 04:21:21.236712] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:08.908 [2024-12-06 04:21:21.237005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.908 [2024-12-06 04:21:21.237032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.908 [2024-12-06 04:21:21.241658] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:08.908 [2024-12-06 04:21:21.241930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.908 [2024-12-06 04:21:21.241956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:08.908 [2024-12-06 04:21:21.246630] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:08.908 [2024-12-06 04:21:21.246923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.908 [2024-12-06 04:21:21.246950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:08.908 [2024-12-06 04:21:21.251679] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:08.908 [2024-12-06 04:21:21.251960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:18:08.908 [2024-12-06 04:21:21.251988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:08.908 [2024-12-06 04:21:21.256777] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:08.908 [2024-12-06 04:21:21.257062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.908 [2024-12-06 04:21:21.257090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.908 [2024-12-06 04:21:21.261811] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:08.908 [2024-12-06 04:21:21.262099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.908 [2024-12-06 04:21:21.262127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:08.908 [2024-12-06 04:21:21.267134] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:08.908 [2024-12-06 04:21:21.267421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.908 [2024-12-06 04:21:21.267457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:08.908 [2024-12-06 04:21:21.272357] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:08.908 [2024-12-06 04:21:21.272685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.908 [2024-12-06 04:21:21.272713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:08.908 [2024-12-06 04:21:21.277737] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:08.908 [2024-12-06 04:21:21.278074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.908 [2024-12-06 04:21:21.278134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.908 [2024-12-06 04:21:21.283049] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:08.908 [2024-12-06 04:21:21.283344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.908 [2024-12-06 04:21:21.283371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:08.908 [2024-12-06 04:21:21.288108] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:08.908 [2024-12-06 04:21:21.288389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.908 [2024-12-06 04:21:21.288425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:08.908 [2024-12-06 04:21:21.293034] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:08.908 [2024-12-06 04:21:21.293313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.908 [2024-12-06 04:21:21.293340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:08.908 [2024-12-06 04:21:21.298034] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:08.908 [2024-12-06 04:21:21.298314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.908 [2024-12-06 04:21:21.298341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.908 [2024-12-06 04:21:21.303146] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:08.908 [2024-12-06 04:21:21.303426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.908 [2024-12-06 04:21:21.303461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:08.908 [2024-12-06 04:21:21.308141] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:08.908 [2024-12-06 04:21:21.308454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.908 [2024-12-06 04:21:21.308481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:08.908 [2024-12-06 04:21:21.313364] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:08.908 [2024-12-06 04:21:21.313684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.908 [2024-12-06 04:21:21.313711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:08.908 [2024-12-06 04:21:21.318528] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:08.908 [2024-12-06 04:21:21.318853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.908 [2024-12-06 04:21:21.318896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.908 [2024-12-06 04:21:21.323676] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:08.908 [2024-12-06 04:21:21.323955] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.908 [2024-12-06 04:21:21.323983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:08.908 [2024-12-06 04:21:21.328761] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:08.908 [2024-12-06 04:21:21.329042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.908 [2024-12-06 04:21:21.329069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:08.908 [2024-12-06 04:21:21.333958] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:08.908 [2024-12-06 04:21:21.334254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.908 [2024-12-06 04:21:21.334281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:08.908 [2024-12-06 04:21:21.338987] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:08.909 [2024-12-06 04:21:21.339271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.909 [2024-12-06 04:21:21.339329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.909 [2024-12-06 04:21:21.343954] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:08.909 [2024-12-06 04:21:21.344232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.909 [2024-12-06 04:21:21.344258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:08.909 [2024-12-06 04:21:21.348892] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:08.909 [2024-12-06 04:21:21.349170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.909 [2024-12-06 04:21:21.349197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:08.909 [2024-12-06 04:21:21.353807] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:08.909 [2024-12-06 04:21:21.354085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.909 [2024-12-06 04:21:21.354111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:08.909 [2024-12-06 04:21:21.358845] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:08.909 [2024-12-06 04:21:21.359168] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.909 [2024-12-06 04:21:21.359195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.909 [2024-12-06 04:21:21.363872] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:08.909 [2024-12-06 04:21:21.364153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.909 [2024-12-06 04:21:21.364180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:08.909 [2024-12-06 04:21:21.368923] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:08.909 [2024-12-06 04:21:21.369201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.909 [2024-12-06 04:21:21.369227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:08.909 [2024-12-06 04:21:21.373820] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:08.909 [2024-12-06 04:21:21.374100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.909 [2024-12-06 04:21:21.374127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:08.909 [2024-12-06 04:21:21.378793] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:08.909 [2024-12-06 04:21:21.379101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.909 [2024-12-06 04:21:21.379143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.909 [2024-12-06 04:21:21.383816] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:08.909 [2024-12-06 04:21:21.384079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.909 [2024-12-06 04:21:21.384104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:08.909 [2024-12-06 04:21:21.388785] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:08.909 [2024-12-06 04:21:21.389066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.909 [2024-12-06 04:21:21.389093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:08.909 [2024-12-06 04:21:21.393887] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:08.909 
[2024-12-06 04:21:21.394173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.909 [2024-12-06 04:21:21.394215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:08.909 [2024-12-06 04:21:21.399173] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:08.909 [2024-12-06 04:21:21.399465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.909 [2024-12-06 04:21:21.399500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.909 [2024-12-06 04:21:21.404531] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:08.909 [2024-12-06 04:21:21.404858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.909 [2024-12-06 04:21:21.404886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:08.909 [2024-12-06 04:21:21.409972] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:08.909 [2024-12-06 04:21:21.410305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.909 [2024-12-06 04:21:21.410332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:08.909 [2024-12-06 04:21:21.415505] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:08.909 [2024-12-06 04:21:21.415828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.909 [2024-12-06 04:21:21.415857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:08.909 [2024-12-06 04:21:21.420884] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:08.909 [2024-12-06 04:21:21.421197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.909 [2024-12-06 04:21:21.421238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.909 [2024-12-06 04:21:21.426367] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:08.909 [2024-12-06 04:21:21.426722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.909 [2024-12-06 04:21:21.426764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:08.909 [2024-12-06 04:21:21.431791] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with 
pdu=0x2000190fef90 00:18:08.909 [2024-12-06 04:21:21.432114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.909 [2024-12-06 04:21:21.432172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:08.909 [2024-12-06 04:21:21.437155] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:08.909 [2024-12-06 04:21:21.437435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.909 [2024-12-06 04:21:21.437473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:08.909 [2024-12-06 04:21:21.442146] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:08.909 [2024-12-06 04:21:21.442441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.909 [2024-12-06 04:21:21.442467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.909 [2024-12-06 04:21:21.447185] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:08.909 [2024-12-06 04:21:21.447463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.909 [2024-12-06 04:21:21.447498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:08.909 [2024-12-06 04:21:21.452247] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:08.909 [2024-12-06 04:21:21.452560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.909 [2024-12-06 04:21:21.452586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:08.909 [2024-12-06 04:21:21.457246] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:08.909 [2024-12-06 04:21:21.457536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.909 [2024-12-06 04:21:21.457562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:08.909 [2024-12-06 04:21:21.462208] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:08.909 [2024-12-06 04:21:21.462498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.909 [2024-12-06 04:21:21.462524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.909 [2024-12-06 04:21:21.467477] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:08.909 [2024-12-06 04:21:21.467807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.909 [2024-12-06 04:21:21.467834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:09.170 [2024-12-06 04:21:21.472838] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:09.170 [2024-12-06 04:21:21.473162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.170 [2024-12-06 04:21:21.473188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:09.170 [2024-12-06 04:21:21.478063] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:09.170 [2024-12-06 04:21:21.478341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.170 [2024-12-06 04:21:21.478368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:09.170 [2024-12-06 04:21:21.483264] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:09.170 [2024-12-06 04:21:21.483557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.170 [2024-12-06 04:21:21.483584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.170 [2024-12-06 04:21:21.488329] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:09.170 [2024-12-06 04:21:21.488646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.170 [2024-12-06 04:21:21.488689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:09.170 [2024-12-06 04:21:21.493412] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:09.170 [2024-12-06 04:21:21.493705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.170 [2024-12-06 04:21:21.493731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:09.170 [2024-12-06 04:21:21.498356] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:09.170 [2024-12-06 04:21:21.498705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.170 [2024-12-06 04:21:21.498734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:09.170 [2024-12-06 04:21:21.503491] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:09.170 [2024-12-06 04:21:21.503821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.170 [2024-12-06 04:21:21.503848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.170 [2024-12-06 04:21:21.508872] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:09.170 [2024-12-06 04:21:21.509158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.170 [2024-12-06 04:21:21.509185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:09.170 [2024-12-06 04:21:21.513956] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:09.170 [2024-12-06 04:21:21.514233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.170 [2024-12-06 04:21:21.514259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:09.170 [2024-12-06 04:21:21.519068] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:09.170 [2024-12-06 04:21:21.519353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.170 [2024-12-06 04:21:21.519380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:09.170 [2024-12-06 04:21:21.524469] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:09.170 [2024-12-06 04:21:21.524810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.170 [2024-12-06 04:21:21.524839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.170 [2024-12-06 04:21:21.529976] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:09.170 [2024-12-06 04:21:21.530317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.170 [2024-12-06 04:21:21.530345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:09.170 [2024-12-06 04:21:21.535247] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:09.170 [2024-12-06 04:21:21.535591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.170 [2024-12-06 04:21:21.535615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:18:09.170 [2024-12-06 04:21:21.540399] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:09.170 [2024-12-06 04:21:21.540746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.170 [2024-12-06 04:21:21.540785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:09.170 [2024-12-06 04:21:21.545550] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:09.170 [2024-12-06 04:21:21.545834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.170 [2024-12-06 04:21:21.545861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.170 [2024-12-06 04:21:21.550601] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:09.170 [2024-12-06 04:21:21.550921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.170 [2024-12-06 04:21:21.550979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:09.170 [2024-12-06 04:21:21.555676] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:09.170 [2024-12-06 04:21:21.555955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.170 [2024-12-06 04:21:21.555982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:09.170 [2024-12-06 04:21:21.560851] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:09.170 [2024-12-06 04:21:21.561140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.170 [2024-12-06 04:21:21.561182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:09.170 [2024-12-06 04:21:21.565860] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:09.170 [2024-12-06 04:21:21.566140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.170 [2024-12-06 04:21:21.566166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.170 [2024-12-06 04:21:21.571297] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:09.170 [2024-12-06 04:21:21.571609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.170 [2024-12-06 04:21:21.571636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:09.170 [2024-12-06 04:21:21.576629] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:09.170 [2024-12-06 04:21:21.576928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.170 [2024-12-06 04:21:21.576955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:09.170 [2024-12-06 04:21:21.581692] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:09.170 [2024-12-06 04:21:21.581969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.170 [2024-12-06 04:21:21.581995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:09.170 [2024-12-06 04:21:21.586823] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:09.170 [2024-12-06 04:21:21.587187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.170 [2024-12-06 04:21:21.587214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.170 [2024-12-06 04:21:21.591963] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:09.170 [2024-12-06 04:21:21.592246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.170 [2024-12-06 04:21:21.592272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:09.170 [2024-12-06 04:21:21.597019] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:09.170 [2024-12-06 04:21:21.597316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.170 [2024-12-06 04:21:21.597342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:09.170 [2024-12-06 04:21:21.602058] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:09.171 [2024-12-06 04:21:21.602338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.171 [2024-12-06 04:21:21.602365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:09.171 [2024-12-06 04:21:21.607145] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:09.171 [2024-12-06 04:21:21.607432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.171 [2024-12-06 04:21:21.607470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.171 [2024-12-06 04:21:21.612198] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:09.171 [2024-12-06 04:21:21.612493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.171 [2024-12-06 04:21:21.612520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:09.171 [2024-12-06 04:21:21.617307] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:09.171 [2024-12-06 04:21:21.617608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.171 [2024-12-06 04:21:21.617634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:09.171 [2024-12-06 04:21:21.622332] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:09.171 [2024-12-06 04:21:21.622670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.171 [2024-12-06 04:21:21.622698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:09.171 [2024-12-06 04:21:21.627406] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:09.171 [2024-12-06 04:21:21.627699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.171 [2024-12-06 04:21:21.627725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.171 [2024-12-06 04:21:21.632380] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:09.171 [2024-12-06 04:21:21.632691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.171 [2024-12-06 04:21:21.632718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:09.171 [2024-12-06 04:21:21.637512] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:09.171 [2024-12-06 04:21:21.637797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.171 [2024-12-06 04:21:21.637823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:09.171 [2024-12-06 04:21:21.642711] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:09.171 [2024-12-06 04:21:21.643035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.171 [2024-12-06 04:21:21.643063] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:09.171 [2024-12-06 04:21:21.647729] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:09.171 [2024-12-06 04:21:21.647995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.171 [2024-12-06 04:21:21.648021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.171 [2024-12-06 04:21:21.653026] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:09.171 [2024-12-06 04:21:21.653340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.171 [2024-12-06 04:21:21.653366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:09.171 [2024-12-06 04:21:21.658623] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:09.171 [2024-12-06 04:21:21.658976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.171 [2024-12-06 04:21:21.659004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:09.171 [2024-12-06 04:21:21.663716] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:09.171 [2024-12-06 04:21:21.663995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.171 [2024-12-06 04:21:21.664022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:09.171 [2024-12-06 04:21:21.668783] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:09.171 [2024-12-06 04:21:21.669082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.171 [2024-12-06 04:21:21.669110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.171 [2024-12-06 04:21:21.673786] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:09.171 [2024-12-06 04:21:21.674067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.171 [2024-12-06 04:21:21.674093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:09.171 [2024-12-06 04:21:21.678781] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:09.171 [2024-12-06 04:21:21.679137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.171 
[2024-12-06 04:21:21.679164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:09.171 [2024-12-06 04:21:21.683903] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:09.171 [2024-12-06 04:21:21.684186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.171 [2024-12-06 04:21:21.684212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:09.171 [2024-12-06 04:21:21.688893] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:09.171 [2024-12-06 04:21:21.689192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.171 [2024-12-06 04:21:21.689219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.171 [2024-12-06 04:21:21.693932] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:09.171 [2024-12-06 04:21:21.694217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.171 [2024-12-06 04:21:21.694243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:09.171 [2024-12-06 04:21:21.699102] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:09.171 [2024-12-06 04:21:21.699383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.171 [2024-12-06 04:21:21.699419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:09.171 [2024-12-06 04:21:21.704117] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:09.171 [2024-12-06 04:21:21.704398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.171 [2024-12-06 04:21:21.704434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:09.171 [2024-12-06 04:21:21.709185] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:09.171 [2024-12-06 04:21:21.709469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.171 [2024-12-06 04:21:21.709495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.171 [2024-12-06 04:21:21.714035] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:09.171 [2024-12-06 04:21:21.714308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:18:09.171 [2024-12-06 04:21:21.714333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:09.171 [2024-12-06 04:21:21.719308] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:09.171 [2024-12-06 04:21:21.719678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.171 [2024-12-06 04:21:21.719702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:09.171 [2024-12-06 04:21:21.724821] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:09.171 [2024-12-06 04:21:21.725090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.171 [2024-12-06 04:21:21.725118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:09.171 [2024-12-06 04:21:21.730569] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc6cd30) with pdu=0x2000190fef90 00:18:09.171 [2024-12-06 04:21:21.730894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.171 [2024-12-06 04:21:21.730921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.430 00:18:09.430 Latency(us) 00:18:09.430 [2024-12-06T04:21:21.995Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:09.430 [2024-12-06T04:21:21.995Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:18:09.430 nvme0n1 : 2.00 5951.58 743.95 0.00 0.00 2682.48 2055.45 10664.49 00:18:09.430 [2024-12-06T04:21:21.995Z] =================================================================================================================== 00:18:09.430 [2024-12-06T04:21:21.995Z] Total : 5951.58 743.95 0.00 0.00 2682.48 2055.45 10664.49 00:18:09.430 0 00:18:09.430 04:21:21 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:18:09.430 04:21:21 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:18:09.430 04:21:21 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:18:09.430 | .driver_specific 00:18:09.430 | .nvme_error 00:18:09.430 | .status_code 00:18:09.430 | .command_transient_transport_error' 00:18:09.430 04:21:21 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:18:09.690 04:21:22 -- host/digest.sh@71 -- # (( 384 > 0 )) 00:18:09.690 04:21:22 -- host/digest.sh@73 -- # killprocess 84500 00:18:09.690 04:21:22 -- common/autotest_common.sh@936 -- # '[' -z 84500 ']' 00:18:09.690 04:21:22 -- common/autotest_common.sh@940 -- # kill -0 84500 00:18:09.690 04:21:22 -- common/autotest_common.sh@941 -- # uname 00:18:09.690 04:21:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:09.690 04:21:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84500 00:18:09.690 04:21:22 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:09.690 killing process with pid 84500 00:18:09.690 04:21:22 -- 
common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:09.690 04:21:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84500' 00:18:09.690 Received shutdown signal, test time was about 2.000000 seconds 00:18:09.690 00:18:09.690 Latency(us) 00:18:09.690 [2024-12-06T04:21:22.255Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:09.690 [2024-12-06T04:21:22.255Z] =================================================================================================================== 00:18:09.690 [2024-12-06T04:21:22.255Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:09.690 04:21:22 -- common/autotest_common.sh@955 -- # kill 84500 00:18:09.690 04:21:22 -- common/autotest_common.sh@960 -- # wait 84500 00:18:09.950 04:21:22 -- host/digest.sh@115 -- # killprocess 84287 00:18:09.950 04:21:22 -- common/autotest_common.sh@936 -- # '[' -z 84287 ']' 00:18:09.950 04:21:22 -- common/autotest_common.sh@940 -- # kill -0 84287 00:18:09.950 04:21:22 -- common/autotest_common.sh@941 -- # uname 00:18:09.950 04:21:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:09.950 04:21:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84287 00:18:09.950 04:21:22 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:09.950 killing process with pid 84287 00:18:09.950 04:21:22 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:09.950 04:21:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84287' 00:18:09.950 04:21:22 -- common/autotest_common.sh@955 -- # kill 84287 00:18:09.950 04:21:22 -- common/autotest_common.sh@960 -- # wait 84287 00:18:09.950 00:18:09.950 real 0m18.682s 00:18:09.950 user 0m36.413s 00:18:09.950 sys 0m4.763s 00:18:09.950 04:21:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:09.950 04:21:22 -- common/autotest_common.sh@10 -- # set +x 00:18:09.950 ************************************ 00:18:09.950 END TEST nvmf_digest_error 00:18:09.950 ************************************ 00:18:10.209 04:21:22 -- host/digest.sh@138 -- # trap - SIGINT SIGTERM EXIT 00:18:10.209 04:21:22 -- host/digest.sh@139 -- # nvmftestfini 00:18:10.209 04:21:22 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:10.209 04:21:22 -- nvmf/common.sh@116 -- # sync 00:18:10.209 04:21:22 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:10.209 04:21:22 -- nvmf/common.sh@119 -- # set +e 00:18:10.209 04:21:22 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:10.209 04:21:22 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:10.209 rmmod nvme_tcp 00:18:10.209 rmmod nvme_fabrics 00:18:10.209 rmmod nvme_keyring 00:18:10.209 04:21:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:10.209 04:21:22 -- nvmf/common.sh@123 -- # set -e 00:18:10.209 04:21:22 -- nvmf/common.sh@124 -- # return 0 00:18:10.209 04:21:22 -- nvmf/common.sh@477 -- # '[' -n 84287 ']' 00:18:10.209 04:21:22 -- nvmf/common.sh@478 -- # killprocess 84287 00:18:10.209 04:21:22 -- common/autotest_common.sh@936 -- # '[' -z 84287 ']' 00:18:10.209 04:21:22 -- common/autotest_common.sh@940 -- # kill -0 84287 00:18:10.209 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (84287) - No such process 00:18:10.209 04:21:22 -- common/autotest_common.sh@963 -- # echo 'Process with pid 84287 is not found' 00:18:10.209 Process with pid 84287 is not found 00:18:10.209 04:21:22 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:10.209 04:21:22 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:10.209 
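The get_transient_errcount/bperf_rpc step above reduces to a single RPC plus a jq filter over its output; a minimal bash sketch of that check, assuming the /var/tmp/bperf.sock socket and the nvme0n1 bdev name used in this run:

    # query SPDK's per-bdev NVMe error counters over the bperf RPC socket
    errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    # the data-digest failures logged above are surfaced to the host as transient
    # transport errors, so this counter must be non-zero for the test to pass
    (( errcount > 0 ))
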
04:21:22 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:10.209 04:21:22 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:10.209 04:21:22 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:10.209 04:21:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:10.209 04:21:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:10.209 04:21:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:10.209 04:21:22 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:18:10.209 00:18:10.209 real 0m37.542s 00:18:10.209 user 1m11.297s 00:18:10.209 sys 0m9.777s 00:18:10.209 04:21:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:10.209 ************************************ 00:18:10.209 END TEST nvmf_digest 00:18:10.209 ************************************ 00:18:10.209 04:21:22 -- common/autotest_common.sh@10 -- # set +x 00:18:10.209 04:21:22 -- nvmf/nvmf.sh@110 -- # [[ 0 -eq 1 ]] 00:18:10.209 04:21:22 -- nvmf/nvmf.sh@115 -- # [[ 1 -eq 1 ]] 00:18:10.209 04:21:22 -- nvmf/nvmf.sh@116 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:18:10.209 04:21:22 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:10.209 04:21:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:10.209 04:21:22 -- common/autotest_common.sh@10 -- # set +x 00:18:10.209 ************************************ 00:18:10.209 START TEST nvmf_multipath 00:18:10.209 ************************************ 00:18:10.209 04:21:22 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:18:10.469 * Looking for test storage... 00:18:10.469 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:10.469 04:21:22 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:18:10.469 04:21:22 -- common/autotest_common.sh@1690 -- # lcov --version 00:18:10.469 04:21:22 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:18:10.469 04:21:22 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:18:10.469 04:21:22 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:18:10.469 04:21:22 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:18:10.469 04:21:22 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:18:10.469 04:21:22 -- scripts/common.sh@335 -- # IFS=.-: 00:18:10.469 04:21:22 -- scripts/common.sh@335 -- # read -ra ver1 00:18:10.469 04:21:22 -- scripts/common.sh@336 -- # IFS=.-: 00:18:10.469 04:21:22 -- scripts/common.sh@336 -- # read -ra ver2 00:18:10.469 04:21:22 -- scripts/common.sh@337 -- # local 'op=<' 00:18:10.469 04:21:22 -- scripts/common.sh@339 -- # ver1_l=2 00:18:10.469 04:21:22 -- scripts/common.sh@340 -- # ver2_l=1 00:18:10.469 04:21:22 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:18:10.469 04:21:22 -- scripts/common.sh@343 -- # case "$op" in 00:18:10.469 04:21:22 -- scripts/common.sh@344 -- # : 1 00:18:10.469 04:21:22 -- scripts/common.sh@363 -- # (( v = 0 )) 00:18:10.469 04:21:22 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:10.469 04:21:22 -- scripts/common.sh@364 -- # decimal 1 00:18:10.469 04:21:22 -- scripts/common.sh@352 -- # local d=1 00:18:10.469 04:21:22 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:10.469 04:21:22 -- scripts/common.sh@354 -- # echo 1 00:18:10.469 04:21:22 -- scripts/common.sh@364 -- # ver1[v]=1 00:18:10.469 04:21:22 -- scripts/common.sh@365 -- # decimal 2 00:18:10.469 04:21:22 -- scripts/common.sh@352 -- # local d=2 00:18:10.469 04:21:22 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:10.469 04:21:22 -- scripts/common.sh@354 -- # echo 2 00:18:10.469 04:21:22 -- scripts/common.sh@365 -- # ver2[v]=2 00:18:10.469 04:21:22 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:10.469 04:21:22 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:10.469 04:21:22 -- scripts/common.sh@367 -- # return 0 00:18:10.469 04:21:22 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:10.469 04:21:22 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:18:10.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:10.469 --rc genhtml_branch_coverage=1 00:18:10.469 --rc genhtml_function_coverage=1 00:18:10.469 --rc genhtml_legend=1 00:18:10.469 --rc geninfo_all_blocks=1 00:18:10.469 --rc geninfo_unexecuted_blocks=1 00:18:10.469 00:18:10.469 ' 00:18:10.469 04:21:22 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:18:10.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:10.469 --rc genhtml_branch_coverage=1 00:18:10.469 --rc genhtml_function_coverage=1 00:18:10.469 --rc genhtml_legend=1 00:18:10.469 --rc geninfo_all_blocks=1 00:18:10.469 --rc geninfo_unexecuted_blocks=1 00:18:10.469 00:18:10.469 ' 00:18:10.469 04:21:22 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:18:10.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:10.469 --rc genhtml_branch_coverage=1 00:18:10.469 --rc genhtml_function_coverage=1 00:18:10.469 --rc genhtml_legend=1 00:18:10.469 --rc geninfo_all_blocks=1 00:18:10.469 --rc geninfo_unexecuted_blocks=1 00:18:10.469 00:18:10.469 ' 00:18:10.469 04:21:22 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:18:10.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:10.469 --rc genhtml_branch_coverage=1 00:18:10.469 --rc genhtml_function_coverage=1 00:18:10.469 --rc genhtml_legend=1 00:18:10.469 --rc geninfo_all_blocks=1 00:18:10.469 --rc geninfo_unexecuted_blocks=1 00:18:10.469 00:18:10.469 ' 00:18:10.469 04:21:22 -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:10.469 04:21:22 -- nvmf/common.sh@7 -- # uname -s 00:18:10.469 04:21:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:10.469 04:21:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:10.469 04:21:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:10.469 04:21:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:10.469 04:21:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:10.469 04:21:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:10.469 04:21:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:10.469 04:21:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:10.469 04:21:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:10.469 04:21:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:10.469 04:21:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cb4d3929-adbe-4142-b5d1-990bbf2d4fca 00:18:10.469 
04:21:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=cb4d3929-adbe-4142-b5d1-990bbf2d4fca 00:18:10.469 04:21:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:10.469 04:21:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:10.469 04:21:22 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:10.469 04:21:22 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:10.469 04:21:22 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:10.469 04:21:22 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:10.469 04:21:22 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:10.469 04:21:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:10.469 04:21:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:10.469 04:21:22 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:10.469 04:21:22 -- paths/export.sh@5 -- # export PATH 00:18:10.469 04:21:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:10.469 04:21:22 -- nvmf/common.sh@46 -- # : 0 00:18:10.469 04:21:22 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:10.469 04:21:22 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:10.469 04:21:22 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:10.469 04:21:22 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:10.469 04:21:22 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:10.469 04:21:22 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:18:10.469 04:21:22 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:10.469 04:21:22 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:10.469 04:21:22 -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:10.469 04:21:22 -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:10.469 04:21:22 -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:10.469 04:21:22 -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:18:10.470 04:21:22 -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:10.470 04:21:22 -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:18:10.470 04:21:22 -- host/multipath.sh@30 -- # nvmftestinit 00:18:10.470 04:21:22 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:10.470 04:21:22 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:10.470 04:21:22 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:10.470 04:21:22 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:10.470 04:21:22 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:10.470 04:21:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:10.470 04:21:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:10.470 04:21:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:10.470 04:21:22 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:18:10.470 04:21:22 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:18:10.470 04:21:22 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:18:10.470 04:21:22 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:18:10.470 04:21:22 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:18:10.470 04:21:22 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:18:10.470 04:21:22 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:10.470 04:21:22 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:10.470 04:21:22 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:10.470 04:21:22 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:18:10.470 04:21:22 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:10.470 04:21:22 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:10.470 04:21:22 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:10.470 04:21:22 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:10.470 04:21:22 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:10.470 04:21:22 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:10.470 04:21:22 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:10.470 04:21:22 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:10.470 04:21:22 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:18:10.470 04:21:23 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:18:10.470 Cannot find device "nvmf_tgt_br" 00:18:10.470 04:21:23 -- nvmf/common.sh@154 -- # true 00:18:10.470 04:21:23 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:18:10.470 Cannot find device "nvmf_tgt_br2" 00:18:10.470 04:21:23 -- nvmf/common.sh@155 -- # true 00:18:10.470 04:21:23 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:18:10.752 04:21:23 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:18:10.752 Cannot find device "nvmf_tgt_br" 00:18:10.752 04:21:23 -- nvmf/common.sh@157 -- # true 00:18:10.752 04:21:23 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:18:10.752 Cannot find device 
"nvmf_tgt_br2" 00:18:10.752 04:21:23 -- nvmf/common.sh@158 -- # true 00:18:10.752 04:21:23 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:18:10.752 04:21:23 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:18:10.752 04:21:23 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:10.752 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:10.752 04:21:23 -- nvmf/common.sh@161 -- # true 00:18:10.752 04:21:23 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:10.752 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:10.752 04:21:23 -- nvmf/common.sh@162 -- # true 00:18:10.752 04:21:23 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:18:10.752 04:21:23 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:10.752 04:21:23 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:10.752 04:21:23 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:10.752 04:21:23 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:10.752 04:21:23 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:10.752 04:21:23 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:10.752 04:21:23 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:10.752 04:21:23 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:10.752 04:21:23 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:18:10.752 04:21:23 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:18:10.752 04:21:23 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:18:10.752 04:21:23 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:18:10.752 04:21:23 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:10.752 04:21:23 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:10.752 04:21:23 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:10.752 04:21:23 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:18:10.753 04:21:23 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:18:10.753 04:21:23 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:18:10.753 04:21:23 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:10.753 04:21:23 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:10.753 04:21:23 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:10.753 04:21:23 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:10.753 04:21:23 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:18:10.753 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:10.753 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:18:10.753 00:18:10.753 --- 10.0.0.2 ping statistics --- 00:18:10.753 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:10.753 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:18:10.753 04:21:23 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:18:10.753 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:18:10.753 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:18:10.753 00:18:10.753 --- 10.0.0.3 ping statistics --- 00:18:10.753 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:10.753 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:18:10.753 04:21:23 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:10.753 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:10.753 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:18:10.753 00:18:10.753 --- 10.0.0.1 ping statistics --- 00:18:10.753 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:10.753 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:18:10.753 04:21:23 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:10.753 04:21:23 -- nvmf/common.sh@421 -- # return 0 00:18:10.753 04:21:23 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:10.753 04:21:23 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:10.753 04:21:23 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:10.753 04:21:23 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:10.753 04:21:23 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:10.753 04:21:23 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:10.753 04:21:23 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:11.025 04:21:23 -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:18:11.025 04:21:23 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:11.025 04:21:23 -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:11.025 04:21:23 -- common/autotest_common.sh@10 -- # set +x 00:18:11.025 04:21:23 -- nvmf/common.sh@469 -- # nvmfpid=84778 00:18:11.025 04:21:23 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:18:11.025 04:21:23 -- nvmf/common.sh@470 -- # waitforlisten 84778 00:18:11.025 04:21:23 -- common/autotest_common.sh@829 -- # '[' -z 84778 ']' 00:18:11.025 04:21:23 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:11.025 04:21:23 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:11.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:11.025 04:21:23 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:11.025 04:21:23 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:11.025 04:21:23 -- common/autotest_common.sh@10 -- # set +x 00:18:11.025 [2024-12-06 04:21:23.376604] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:11.025 [2024-12-06 04:21:23.376724] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:11.025 [2024-12-06 04:21:23.517645] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:11.295 [2024-12-06 04:21:23.598694] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:11.295 [2024-12-06 04:21:23.598843] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:11.295 [2024-12-06 04:21:23.598857] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
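The nvmf_veth_init sequence traced above gives the target its own network namespace and joins it to the initiator over a veth/bridge topology; a condensed bash sketch of the same commands with the addresses from this run (the second target interface and the cleanup paths are left out):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target side pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # move target end into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                      # bridge the two root-namespace peers
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2                                           # initiator -> target reachability check
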
00:18:11.295 [2024-12-06 04:21:23.598866] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:11.295 [2024-12-06 04:21:23.599026] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:11.295 [2024-12-06 04:21:23.599036] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:11.862 04:21:24 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:11.862 04:21:24 -- common/autotest_common.sh@862 -- # return 0 00:18:11.862 04:21:24 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:11.862 04:21:24 -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:11.862 04:21:24 -- common/autotest_common.sh@10 -- # set +x 00:18:12.121 04:21:24 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:12.121 04:21:24 -- host/multipath.sh@33 -- # nvmfapp_pid=84778 00:18:12.121 04:21:24 -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:12.380 [2024-12-06 04:21:24.721633] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:12.380 04:21:24 -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:18:12.640 Malloc0 00:18:12.640 04:21:25 -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:18:12.898 04:21:25 -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:13.155 04:21:25 -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:13.414 [2024-12-06 04:21:25.773943] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:13.414 04:21:25 -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:13.672 [2024-12-06 04:21:26.018097] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:13.672 04:21:26 -- host/multipath.sh@44 -- # bdevperf_pid=84829 00:18:13.672 04:21:26 -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:18:13.672 04:21:26 -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:13.672 04:21:26 -- host/multipath.sh@47 -- # waitforlisten 84829 /var/tmp/bdevperf.sock 00:18:13.672 04:21:26 -- common/autotest_common.sh@829 -- # '[' -z 84829 ']' 00:18:13.672 04:21:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:13.672 04:21:26 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:13.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:13.672 04:21:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:18:13.672 04:21:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:13.672 04:21:26 -- common/autotest_common.sh@10 -- # set +x 00:18:14.607 04:21:27 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:14.607 04:21:27 -- common/autotest_common.sh@862 -- # return 0 00:18:14.607 04:21:27 -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:18:14.866 04:21:27 -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:18:15.434 Nvme0n1 00:18:15.434 04:21:27 -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:18:15.693 Nvme0n1 00:18:15.693 04:21:28 -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:18:15.693 04:21:28 -- host/multipath.sh@78 -- # sleep 1 00:18:16.631 04:21:29 -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:18:16.631 04:21:29 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:16.891 04:21:29 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:17.151 04:21:29 -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:18:17.151 04:21:29 -- host/multipath.sh@65 -- # dtrace_pid=84879 00:18:17.151 04:21:29 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 84778 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:17.151 04:21:29 -- host/multipath.sh@66 -- # sleep 6 00:18:23.718 04:21:35 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:23.718 04:21:35 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:23.718 04:21:35 -- host/multipath.sh@67 -- # active_port=4421 00:18:23.718 04:21:35 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:23.718 Attaching 4 probes... 
00:18:23.718 @path[10.0.0.2, 4421]: 18908 00:18:23.718 @path[10.0.0.2, 4421]: 19432 00:18:23.718 @path[10.0.0.2, 4421]: 19332 00:18:23.718 @path[10.0.0.2, 4421]: 19487 00:18:23.718 @path[10.0.0.2, 4421]: 19318 00:18:23.718 04:21:35 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:23.718 04:21:35 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:23.718 04:21:35 -- host/multipath.sh@69 -- # sed -n 1p 00:18:23.718 04:21:35 -- host/multipath.sh@69 -- # port=4421 00:18:23.718 04:21:35 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:23.718 04:21:35 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:23.718 04:21:35 -- host/multipath.sh@72 -- # kill 84879 00:18:23.718 04:21:35 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:23.718 04:21:35 -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:18:23.718 04:21:35 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:23.718 04:21:36 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:18:23.976 04:21:36 -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:18:23.976 04:21:36 -- host/multipath.sh@65 -- # dtrace_pid=84998 00:18:23.976 04:21:36 -- host/multipath.sh@66 -- # sleep 6 00:18:23.976 04:21:36 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 84778 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:30.538 04:21:42 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:30.538 04:21:42 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:18:30.538 04:21:42 -- host/multipath.sh@67 -- # active_port=4420 00:18:30.538 04:21:42 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:30.538 Attaching 4 probes... 
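Each confirm_io_on_port pass above follows the same pattern: set the ANA state on both listeners, let the nvmf_path.bt probes watch the target for a few seconds, then check that the trsvcid reported as optimized is also the one the probes counted I/O on. A minimal bash sketch of one pass; capturing the bpftrace output into trace.txt is an assumption about how the harness wires it up:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    # steer the ANA states of the two listeners (here: prefer 4421)
    $rpc nvmf_subsystem_listener_set_ana_state $nqn -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
    $rpc nvmf_subsystem_listener_set_ana_state $nqn -t tcp -a 10.0.0.2 -s 4421 -n optimized

    # watch the I/O path of the target (pid 84778 in this run) for a few seconds;
    # writing the probe output to trace.txt is an assumed detail of the wrapper
    /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 84778 \
        /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt &> trace.txt &
    sleep 6
    kill $!

    # the port whose listener reports ana_state "optimized" ...
    active_port=$($rpc nvmf_subsystem_get_listeners $nqn \
        | jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid')

    # ... must match the port the @path[10.0.0.2, ...] probes actually saw I/O on
    port=$(awk '$1=="@path[10.0.0.2," {print $2}' trace.txt | cut -d ']' -f1 | sed -n 1p)
    [[ $port == "$active_port" ]] && rm -f trace.txt
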
00:18:30.538 @path[10.0.0.2, 4420]: 18954 00:18:30.538 @path[10.0.0.2, 4420]: 19907 00:18:30.538 @path[10.0.0.2, 4420]: 19337 00:18:30.538 @path[10.0.0.2, 4420]: 19287 00:18:30.538 @path[10.0.0.2, 4420]: 19261 00:18:30.538 04:21:42 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:30.538 04:21:42 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:30.538 04:21:42 -- host/multipath.sh@69 -- # sed -n 1p 00:18:30.538 04:21:42 -- host/multipath.sh@69 -- # port=4420 00:18:30.538 04:21:42 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:18:30.538 04:21:42 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:18:30.538 04:21:42 -- host/multipath.sh@72 -- # kill 84998 00:18:30.538 04:21:42 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:30.538 04:21:42 -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:18:30.538 04:21:42 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:18:30.538 04:21:43 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:30.865 04:21:43 -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:18:30.865 04:21:43 -- host/multipath.sh@65 -- # dtrace_pid=85110 00:18:30.865 04:21:43 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 84778 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:30.865 04:21:43 -- host/multipath.sh@66 -- # sleep 6 00:18:37.431 04:21:49 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:37.431 04:21:49 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:37.431 04:21:49 -- host/multipath.sh@67 -- # active_port=4421 00:18:37.431 04:21:49 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:37.431 Attaching 4 probes... 
00:18:37.431 @path[10.0.0.2, 4421]: 14734 00:18:37.432 @path[10.0.0.2, 4421]: 18606 00:18:37.432 @path[10.0.0.2, 4421]: 18720 00:18:37.432 @path[10.0.0.2, 4421]: 18812 00:18:37.432 @path[10.0.0.2, 4421]: 19432 00:18:37.432 04:21:49 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:37.432 04:21:49 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:37.432 04:21:49 -- host/multipath.sh@69 -- # sed -n 1p 00:18:37.432 04:21:49 -- host/multipath.sh@69 -- # port=4421 00:18:37.432 04:21:49 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:37.432 04:21:49 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:37.432 04:21:49 -- host/multipath.sh@72 -- # kill 85110 00:18:37.432 04:21:49 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:37.432 04:21:49 -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:18:37.432 04:21:49 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:18:37.432 04:21:49 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:18:37.691 04:21:50 -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:18:37.691 04:21:50 -- host/multipath.sh@65 -- # dtrace_pid=85228 00:18:37.691 04:21:50 -- host/multipath.sh@66 -- # sleep 6 00:18:37.691 04:21:50 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 84778 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:44.249 04:21:56 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:18:44.249 04:21:56 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:44.249 04:21:56 -- host/multipath.sh@67 -- # active_port= 00:18:44.249 04:21:56 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:44.249 Attaching 4 probes... 
00:18:44.249 00:18:44.249 00:18:44.249 00:18:44.249 00:18:44.249 00:18:44.249 04:21:56 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:44.249 04:21:56 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:44.249 04:21:56 -- host/multipath.sh@69 -- # sed -n 1p 00:18:44.249 04:21:56 -- host/multipath.sh@69 -- # port= 00:18:44.249 04:21:56 -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:18:44.249 04:21:56 -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:18:44.249 04:21:56 -- host/multipath.sh@72 -- # kill 85228 00:18:44.249 04:21:56 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:44.249 04:21:56 -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:18:44.249 04:21:56 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:44.249 04:21:56 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:44.507 04:21:56 -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:18:44.508 04:21:56 -- host/multipath.sh@65 -- # dtrace_pid=85349 00:18:44.508 04:21:56 -- host/multipath.sh@66 -- # sleep 6 00:18:44.508 04:21:56 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 84778 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:51.071 04:22:02 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:51.071 04:22:02 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:51.071 04:22:03 -- host/multipath.sh@67 -- # active_port=4421 00:18:51.071 04:22:03 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:51.071 Attaching 4 probes... 
00:18:51.071 @path[10.0.0.2, 4421]: 19033 00:18:51.071 @path[10.0.0.2, 4421]: 18172 00:18:51.071 @path[10.0.0.2, 4421]: 17977 00:18:51.071 @path[10.0.0.2, 4421]: 18827 00:18:51.071 @path[10.0.0.2, 4421]: 19209 00:18:51.071 04:22:03 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:51.071 04:22:03 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:51.071 04:22:03 -- host/multipath.sh@69 -- # sed -n 1p 00:18:51.071 04:22:03 -- host/multipath.sh@69 -- # port=4421 00:18:51.071 04:22:03 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:51.071 04:22:03 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:51.071 04:22:03 -- host/multipath.sh@72 -- # kill 85349 00:18:51.071 04:22:03 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:51.071 04:22:03 -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:51.071 [2024-12-06 04:22:03.451516] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14cad80 is same with the state(5) to be set 00:18:51.071 [2024-12-06 04:22:03.451575] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14cad80 is same with the state(5) to be set 00:18:51.071 [2024-12-06 04:22:03.451604] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14cad80 is same with the state(5) to be set 00:18:51.072 [2024-12-06 04:22:03.451614] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14cad80 is same with the state(5) to be set 00:18:51.072 [2024-12-06 04:22:03.451622] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14cad80 is same with the state(5) to be set 00:18:51.072 [2024-12-06 04:22:03.451631] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14cad80 is same with the state(5) to be set 00:18:51.072 [2024-12-06 04:22:03.451640] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14cad80 is same with the state(5) to be set 00:18:51.072 [2024-12-06 04:22:03.451648] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14cad80 is same with the state(5) to be set 00:18:51.072 [2024-12-06 04:22:03.451656] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14cad80 is same with the state(5) to be set 00:18:51.072 [2024-12-06 04:22:03.451665] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14cad80 is same with the state(5) to be set 00:18:51.072 [2024-12-06 04:22:03.451673] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14cad80 is same with the state(5) to be set 00:18:51.072 [2024-12-06 04:22:03.451681] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14cad80 is same with the state(5) to be set 00:18:51.072 [2024-12-06 04:22:03.451689] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14cad80 is same with the state(5) to be set 00:18:51.072 [2024-12-06 04:22:03.451698] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14cad80 is same with the state(5) to be set 00:18:51.072 [2024-12-06 04:22:03.451706] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14cad80 is same with the state(5) to be set 00:18:51.072 [2024-12-06 04:22:03.451714] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x14cad80 is same with the state(5) to be set 00:18:51.072 [same nvmf_tcp_qpair_set_recv_state message repeated for the remaining callbacks while the 10.0.0.2:4421 listener is torn down] 00:18:51.072 04:22:03 -- host/multipath.sh@101 -- # sleep 1 00:18:52.009 04:22:04 -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:18:52.009 04:22:04 -- host/multipath.sh@65 -- # dtrace_pid=85467 00:18:52.009 04:22:04 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 84778 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:52.009 04:22:04 -- host/multipath.sh@66 -- # sleep 6
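The hot-remove of the 4421 listener above, together with the add_listener/set_ana_state calls that follow further down (multipath.sh@107/@108), is the actual failover exercise: drop the optimized path so I/O has to move to 4420, then restore it and confirm I/O returns to 4421. The corresponding RPC sequence, as it appears in this run:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    # fail over: drop the optimized 4421 listener; I/O must continue on 10.0.0.2:4420
    $rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4421

    # fail back: recreate the listener and mark it optimized again; I/O is expected back on 4421
    $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4421
    $rpc nvmf_subsystem_listener_set_ana_state $nqn -t tcp -a 10.0.0.2 -s 4421 -n optimized
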
host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:19:11.976 04:22:23 -- host/multipath.sh@67 -- # active_port=4421 00:19:11.976 04:22:23 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:11.976 Attaching 4 probes... 00:19:11.976 @path[10.0.0.2, 4421]: 19138 00:19:11.976 @path[10.0.0.2, 4421]: 18913 00:19:11.976 @path[10.0.0.2, 4421]: 18717 00:19:11.976 @path[10.0.0.2, 4421]: 18685 00:19:11.976 @path[10.0.0.2, 4421]: 18751 00:19:11.976 04:22:23 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:11.976 04:22:23 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:19:11.976 04:22:23 -- host/multipath.sh@69 -- # sed -n 1p 00:19:11.976 04:22:23 -- host/multipath.sh@69 -- # port=4421 00:19:11.976 04:22:23 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:19:11.976 04:22:23 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:19:11.976 04:22:23 -- host/multipath.sh@72 -- # kill 85647 00:19:11.976 04:22:23 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:11.976 04:22:23 -- host/multipath.sh@114 -- # killprocess 84829 00:19:11.976 04:22:23 -- common/autotest_common.sh@936 -- # '[' -z 84829 ']' 00:19:11.976 04:22:23 -- common/autotest_common.sh@940 -- # kill -0 84829 00:19:11.976 04:22:23 -- common/autotest_common.sh@941 -- # uname 00:19:11.976 04:22:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:11.976 04:22:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84829 00:19:11.976 04:22:23 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:19:11.976 04:22:23 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:19:11.976 killing process with pid 84829 00:19:11.976 04:22:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84829' 00:19:11.976 04:22:23 -- common/autotest_common.sh@955 -- # kill 84829 00:19:11.976 04:22:23 -- common/autotest_common.sh@960 -- # wait 84829 00:19:11.976 Connection closed with partial response: 00:19:11.976 00:19:11.976 00:19:11.976 04:22:23 -- host/multipath.sh@116 -- # wait 84829 00:19:11.976 04:22:23 -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:11.976 [2024-12-06 04:21:26.088139] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:19:11.976 [2024-12-06 04:21:26.088257] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84829 ] 00:19:11.976 [2024-12-06 04:21:26.231078] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:11.976 [2024-12-06 04:21:26.324400] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:11.976 Running I/O for 90 seconds... 
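For readers following the trace above: the confirm_io_on_port calls logged at multipath.sh@64 through @73 amount to the flow sketched below. This is an approximate reconstruction from the commands visible in this log, not the helper from multipath.sh itself; the bdevperf pid 84778, the NQN, and the file paths are taken from this run, the function name confirm_io_on_port_sketch is made up, and the redirection of the bpftrace output into trace.txt is assumed.

# Hypothetical re-creation of the confirm_io_on_port flow traced above (multipath.sh@64-@73).
confirm_io_on_port_sketch() {
    local ana_state=$1 expected_port=$2
    local trace=/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
    # Attach the nvmf_path.bt probes to the running bdevperf process and let I/O accumulate
    # on whichever path the initiator is currently using (output location is assumed).
    /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 84778 \
        /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt > "$trace" &
    local dtrace_pid=$!
    sleep 6
    # Ask the target which listener port currently reports the requested ANA state.
    local active_port
    active_port=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners \
        nqn.2016-06.io.spdk:cnode1 |
        jq -r ".[] | select(.ana_states[0].ana_state==\"$ana_state\") | .address.trsvcid")
    # The probes emit "@path[10.0.0.2, <port>]: <count>" lines; extract the port that actually carried I/O.
    local port
    port=$(cut -d ']' -f1 "$trace" | awk '$1=="@path[10.0.0.2," {print $2}' | sed -n 1p)
    kill "$dtrace_pid"
    rm -f "$trace"
    # Pass only when the observed I/O port and the ANA-state port both match the expectation.
    [[ $port == "$expected_port" && $active_port == "$expected_port" ]]
}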
00:19:11.976 [2024-12-06 04:21:36.453187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:69568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.976 [2024-12-06 04:21:36.453262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:11.976 [2024-12-06 04:21:36.453334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:69576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.976 [2024-12-06 04:21:36.453355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:11.976 [2024-12-06 04:21:36.453379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:69584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.976 [2024-12-06 04:21:36.453395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:11.976 [2024-12-06 04:21:36.453434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:69592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.976 [2024-12-06 04:21:36.453451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:11.976 [2024-12-06 04:21:36.453472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:69600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.976 [2024-12-06 04:21:36.453486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:11.976 [2024-12-06 04:21:36.453507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:69608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.976 [2024-12-06 04:21:36.453522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:11.976 [2024-12-06 04:21:36.453543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:68912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.976 [2024-12-06 04:21:36.453557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:11.976 [2024-12-06 04:21:36.453577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:68928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.976 [2024-12-06 04:21:36.453591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:11.977 [2024-12-06 04:21:36.453612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:68936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.977 [2024-12-06 04:21:36.453626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:11.977 [2024-12-06 04:21:36.453647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:68960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.977 [2024-12-06 04:21:36.453661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:11.977 [2024-12-06 04:21:36.453681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:69000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.977 [2024-12-06 04:21:36.453719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:11.977 [2024-12-06 04:21:36.453743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:69008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.977 [2024-12-06 04:21:36.453758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:11.977 [2024-12-06 04:21:36.453781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:69016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.977 [2024-12-06 04:21:36.453795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:11.977 [2024-12-06 04:21:36.453815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:69024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.977 [2024-12-06 04:21:36.453830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:11.977 [2024-12-06 04:21:36.453851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:69616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.977 [2024-12-06 04:21:36.453865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:11.977 [2024-12-06 04:21:36.453885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:69624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.977 [2024-12-06 04:21:36.453901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:11.977 [2024-12-06 04:21:36.453923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:69632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.977 [2024-12-06 04:21:36.453937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:11.977 [2024-12-06 04:21:36.453957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:69640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.977 [2024-12-06 04:21:36.453972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:11.977 [2024-12-06 04:21:36.453992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:69648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.977 [2024-12-06 04:21:36.454007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:11.977 [2024-12-06 04:21:36.454028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:69656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.977 [2024-12-06 04:21:36.454042] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:11.977 [2024-12-06 04:21:36.454063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:69664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.977 [2024-12-06 04:21:36.454078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:11.977 [2024-12-06 04:21:36.454102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:69672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.977 [2024-12-06 04:21:36.454118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:11.977 [2024-12-06 04:21:36.454139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:69680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.977 [2024-12-06 04:21:36.454153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:11.977 [2024-12-06 04:21:36.454183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:69688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.977 [2024-12-06 04:21:36.454199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:11.977 [2024-12-06 04:21:36.454219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:69696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.977 [2024-12-06 04:21:36.454234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:11.977 [2024-12-06 04:21:36.454254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:69704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.977 [2024-12-06 04:21:36.454269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:11.977 [2024-12-06 04:21:36.454289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:69712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.977 [2024-12-06 04:21:36.454303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:11.977 [2024-12-06 04:21:36.454324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:69720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.977 [2024-12-06 04:21:36.454338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:11.977 [2024-12-06 04:21:36.454358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:69728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.977 [2024-12-06 04:21:36.454390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:11.977 [2024-12-06 04:21:36.454426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:69736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:11.977 [2024-12-06 04:21:36.454445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:11.977 [2024-12-06 04:21:36.454469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:69040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.977 [2024-12-06 04:21:36.454484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:11.977 [2024-12-06 04:21:36.454506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:69080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.977 [2024-12-06 04:21:36.454523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:11.977 [2024-12-06 04:21:36.454544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:69088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.977 [2024-12-06 04:21:36.454587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:11.977 [2024-12-06 04:21:36.454611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:69128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.977 [2024-12-06 04:21:36.454627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:11.977 [2024-12-06 04:21:36.454649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:69136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.977 [2024-12-06 04:21:36.454665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:11.977 [2024-12-06 04:21:36.454697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:69144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.977 [2024-12-06 04:21:36.454714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:11.977 [2024-12-06 04:21:36.454736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:69160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.977 [2024-12-06 04:21:36.454752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:11.977 [2024-12-06 04:21:36.454774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:69168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.977 [2024-12-06 04:21:36.454789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:11.977 [2024-12-06 04:21:36.454811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:69744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.977 [2024-12-06 04:21:36.454826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:11.977 [2024-12-06 04:21:36.454848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 
nsid:1 lba:69752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.977 [2024-12-06 04:21:36.454864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:11.977 [2024-12-06 04:21:36.454886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:69760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.977 [2024-12-06 04:21:36.454901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:11.977 [2024-12-06 04:21:36.454923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:69768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.977 [2024-12-06 04:21:36.454939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:11.977 [2024-12-06 04:21:36.454961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:69776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.977 [2024-12-06 04:21:36.454977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:11.977 [2024-12-06 04:21:36.454999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:69784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.977 [2024-12-06 04:21:36.455030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:11.977 [2024-12-06 04:21:36.455051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:69792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.977 [2024-12-06 04:21:36.455066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:11.977 [2024-12-06 04:21:36.455088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:69800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.977 [2024-12-06 04:21:36.455103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:11.977 [2024-12-06 04:21:36.455124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:69808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.978 [2024-12-06 04:21:36.455139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:11.978 [2024-12-06 04:21:36.455160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:69816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.978 [2024-12-06 04:21:36.455181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:11.978 [2024-12-06 04:21:36.455203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:69824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.978 [2024-12-06 04:21:36.455219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:11.978 [2024-12-06 04:21:36.455240] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:69832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.978 [2024-12-06 04:21:36.455256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:11.978 [2024-12-06 04:21:36.455281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:69840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.978 [2024-12-06 04:21:36.455297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:11.978 [2024-12-06 04:21:36.455318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:69848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.978 [2024-12-06 04:21:36.455334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:11.978 [2024-12-06 04:21:36.455355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:69856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.978 [2024-12-06 04:21:36.455370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:11.978 [2024-12-06 04:21:36.455391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:69864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.978 [2024-12-06 04:21:36.455406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:11.978 [2024-12-06 04:21:36.455440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:69872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.978 [2024-12-06 04:21:36.455458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:11.978 [2024-12-06 04:21:36.455480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:69880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.978 [2024-12-06 04:21:36.455496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:11.978 [2024-12-06 04:21:36.455516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:69184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.978 [2024-12-06 04:21:36.455532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:11.978 [2024-12-06 04:21:36.455553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:69200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.978 [2024-12-06 04:21:36.455569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:11.978 [2024-12-06 04:21:36.455590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:69240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.978 [2024-12-06 04:21:36.455605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 
00:19:11.978 [2024-12-06 04:21:36.455626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:69248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.978 [2024-12-06 04:21:36.455649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:11.978 [2024-12-06 04:21:36.455672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:69272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.978 [2024-12-06 04:21:36.455688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:11.978 [2024-12-06 04:21:36.455709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:69280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.978 [2024-12-06 04:21:36.455725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:11.978 [2024-12-06 04:21:36.455745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:69296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.978 [2024-12-06 04:21:36.455762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:11.978 [2024-12-06 04:21:36.455784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:69312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.978 [2024-12-06 04:21:36.455799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:11.978 [2024-12-06 04:21:36.455820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:69888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.978 [2024-12-06 04:21:36.455835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:11.978 [2024-12-06 04:21:36.455856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:69896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.978 [2024-12-06 04:21:36.455871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:11.978 [2024-12-06 04:21:36.455892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:69904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.978 [2024-12-06 04:21:36.455907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:11.978 [2024-12-06 04:21:36.455929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:69912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.978 [2024-12-06 04:21:36.455944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:11.978 [2024-12-06 04:21:36.455966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:69920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.978 [2024-12-06 04:21:36.455981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:11.978 [2024-12-06 04:21:36.456003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:69928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.978 [2024-12-06 04:21:36.456018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:11.978 [2024-12-06 04:21:36.456059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:69936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.978 [2024-12-06 04:21:36.456080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.978 [2024-12-06 04:21:36.456103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:69944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.978 [2024-12-06 04:21:36.456119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:11.978 [2024-12-06 04:21:36.456149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:69952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.978 [2024-12-06 04:21:36.456165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:11.978 [2024-12-06 04:21:36.456187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:69960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.978 [2024-12-06 04:21:36.456202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:11.978 [2024-12-06 04:21:36.456223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:69968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.978 [2024-12-06 04:21:36.456238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:11.978 [2024-12-06 04:21:36.456259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:69976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.978 [2024-12-06 04:21:36.456274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:11.978 [2024-12-06 04:21:36.456295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:69984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.978 [2024-12-06 04:21:36.456310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:11.978 [2024-12-06 04:21:36.456331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:69992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.978 [2024-12-06 04:21:36.456346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:11.978 [2024-12-06 04:21:36.456367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:70000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.978 [2024-12-06 04:21:36.456395] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:11.978 [2024-12-06 04:21:36.456421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:70008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.978 [2024-12-06 04:21:36.456436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:11.978 [2024-12-06 04:21:36.456458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:70016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.978 [2024-12-06 04:21:36.456473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:11.978 [2024-12-06 04:21:36.456495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:70024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.978 [2024-12-06 04:21:36.456510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:11.978 [2024-12-06 04:21:36.456531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:70032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.978 [2024-12-06 04:21:36.456546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:11.978 [2024-12-06 04:21:36.456574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:69344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.978 [2024-12-06 04:21:36.456590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:11.978 [2024-12-06 04:21:36.456620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:69352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.978 [2024-12-06 04:21:36.456636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:11.979 [2024-12-06 04:21:36.456657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:69360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.979 [2024-12-06 04:21:36.456673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:11.979 [2024-12-06 04:21:36.456694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:69384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.979 [2024-12-06 04:21:36.456709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:11.979 [2024-12-06 04:21:36.456731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:69392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.979 [2024-12-06 04:21:36.456746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:11.979 [2024-12-06 04:21:36.456768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:69400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:11.979 [2024-12-06 04:21:36.456783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:11.979 [2024-12-06 04:21:36.456804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:69408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.979 [2024-12-06 04:21:36.456819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:11.979 [2024-12-06 04:21:36.456841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:69416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.979 [2024-12-06 04:21:36.456856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:11.979 [2024-12-06 04:21:36.456877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:70040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.979 [2024-12-06 04:21:36.456892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:11.979 [2024-12-06 04:21:36.456913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:70048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.979 [2024-12-06 04:21:36.456928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:11.979 [2024-12-06 04:21:36.456950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:70056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.979 [2024-12-06 04:21:36.456965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:11.979 [2024-12-06 04:21:36.456995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:70064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.979 [2024-12-06 04:21:36.457011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:11.979 [2024-12-06 04:21:36.457031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:70072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.979 [2024-12-06 04:21:36.457047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:11.979 [2024-12-06 04:21:36.457068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:70080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.979 [2024-12-06 04:21:36.457090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:11.979 [2024-12-06 04:21:36.457112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:70088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.979 [2024-12-06 04:21:36.457128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:11.979 [2024-12-06 04:21:36.457148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 
nsid:1 lba:70096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.979 [2024-12-06 04:21:36.457164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:11.979 [2024-12-06 04:21:36.457189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:70104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.979 [2024-12-06 04:21:36.457204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:11.979 [2024-12-06 04:21:36.457225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:70112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.979 [2024-12-06 04:21:36.457241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:11.979 [2024-12-06 04:21:36.457262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:70120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.979 [2024-12-06 04:21:36.457277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:11.979 [2024-12-06 04:21:36.457298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:69424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.979 [2024-12-06 04:21:36.457313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:11.979 [2024-12-06 04:21:36.457334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:69432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.979 [2024-12-06 04:21:36.457348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:11.979 [2024-12-06 04:21:36.457369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:69440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.979 [2024-12-06 04:21:36.457396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:11.979 [2024-12-06 04:21:36.457422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:69464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.979 [2024-12-06 04:21:36.457437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:11.979 [2024-12-06 04:21:36.457458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:69488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.979 [2024-12-06 04:21:36.457474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:11.979 [2024-12-06 04:21:36.457494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:69520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.979 [2024-12-06 04:21:36.457509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:11.979 [2024-12-06 04:21:36.457530] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:69528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.979 [2024-12-06 04:21:36.457553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:11.979 [2024-12-06 04:21:36.459158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:69552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.979 [2024-12-06 04:21:36.459191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:11.979 [2024-12-06 04:21:36.459222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:70128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.979 [2024-12-06 04:21:36.459239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:11.979 [2024-12-06 04:21:36.459262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:70136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.979 [2024-12-06 04:21:36.459277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:11.979 [2024-12-06 04:21:36.459299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:70144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.979 [2024-12-06 04:21:36.459314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:11.979 [2024-12-06 04:21:36.459335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:70152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.979 [2024-12-06 04:21:36.459351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:11.979 [2024-12-06 04:21:36.459371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:70160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.979 [2024-12-06 04:21:36.459386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:11.979 [2024-12-06 04:21:36.459430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:70168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.979 [2024-12-06 04:21:36.459450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:11.979 [2024-12-06 04:21:36.459472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:70176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.979 [2024-12-06 04:21:36.459488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:11.979 [2024-12-06 04:21:36.459509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:70184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.979 [2024-12-06 04:21:36.459525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002f p:0 m:0 dnr:0 
00:19:11.979 [2024-12-06 04:21:36.459546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:70192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.979 [2024-12-06 04:21:36.459561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:11.979 [2024-12-06 04:21:36.459582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:70200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.979 [2024-12-06 04:21:36.459596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:11.979 [2024-12-06 04:21:36.459618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:70208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.979 [2024-12-06 04:21:36.459633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:11.979 [2024-12-06 04:21:36.459668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:70216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.979 [2024-12-06 04:21:36.459684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:11.979 [2024-12-06 04:21:36.459721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:70224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.979 [2024-12-06 04:21:36.459742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:11.979 [2024-12-06 04:21:36.459765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:70232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.979 [2024-12-06 04:21:36.459781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:11.980 [2024-12-06 04:21:36.459802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:70240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.980 [2024-12-06 04:21:36.459817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:11.980 [2024-12-06 04:21:36.459838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:70248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.980 [2024-12-06 04:21:36.459853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:11.980 [2024-12-06 04:21:36.459875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:70256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.980 [2024-12-06 04:21:36.459891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:11.980 [2024-12-06 04:21:36.459913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:70264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.980 [2024-12-06 04:21:36.459928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:11.980 [2024-12-06 04:21:43.025434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:89272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.980 [2024-12-06 04:21:43.025508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:11.980 [2024-12-06 04:21:43.025581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:89280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.980 [2024-12-06 04:21:43.025620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:11.980 [2024-12-06 04:21:43.025644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:89288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.980 [2024-12-06 04:21:43.025660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:11.980 [2024-12-06 04:21:43.025681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:89296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.980 [2024-12-06 04:21:43.025695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:11.980 [2024-12-06 04:21:43.025716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:89304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.980 [2024-12-06 04:21:43.025731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:11.980 [2024-12-06 04:21:43.025775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:89312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.980 [2024-12-06 04:21:43.025792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:11.980 [2024-12-06 04:21:43.025813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:89320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.980 [2024-12-06 04:21:43.025828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:11.980 [2024-12-06 04:21:43.025848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:89328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.980 [2024-12-06 04:21:43.025863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:11.980 [2024-12-06 04:21:43.025884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:89336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.980 [2024-12-06 04:21:43.025899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:11.980 [2024-12-06 04:21:43.025920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:89344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.980 [2024-12-06 04:21:43.025934] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:11.980 [2024-12-06 04:21:43.025955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:89352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.980 [2024-12-06 04:21:43.025969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:11.980 [2024-12-06 04:21:43.025990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:89360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.980 [2024-12-06 04:21:43.026005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:11.980 [2024-12-06 04:21:43.026159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:89368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.980 [2024-12-06 04:21:43.026181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:11.980 [2024-12-06 04:21:43.026203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:89376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.980 [2024-12-06 04:21:43.026217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:11.980 [2024-12-06 04:21:43.026238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:89384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.980 [2024-12-06 04:21:43.026252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:11.980 [2024-12-06 04:21:43.026272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:89392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.980 [2024-12-06 04:21:43.026287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:11.980 [2024-12-06 04:21:43.026307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:89400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.980 [2024-12-06 04:21:43.026322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:11.980 [2024-12-06 04:21:43.026354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:89408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.980 [2024-12-06 04:21:43.026372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:11.980 [2024-12-06 04:21:43.026393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:89416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.980 [2024-12-06 04:21:43.026408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:11.980 [2024-12-06 04:21:43.026428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:88800 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:19:11.980 [2024-12-06 04:21:43.026460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:11.980 [2024-12-06 04:21:43.026483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:88808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.980 [2024-12-06 04:21:43.026498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:11.980 [2024-12-06 04:21:43.026520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:88816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.980 [2024-12-06 04:21:43.026535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:11.980 [2024-12-06 04:21:43.026565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:88856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.980 [2024-12-06 04:21:43.026599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:11.980 [2024-12-06 04:21:43.026621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:88872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.980 [2024-12-06 04:21:43.026636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:11.980 [2024-12-06 04:21:43.026657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:88888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.980 [2024-12-06 04:21:43.026672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:11.980 [2024-12-06 04:21:43.026693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:88896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.980 [2024-12-06 04:21:43.026709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:11.980 [2024-12-06 04:21:43.026730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:88944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.980 [2024-12-06 04:21:43.026744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:11.980 [2024-12-06 04:21:43.026765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:89424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.980 [2024-12-06 04:21:43.026780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:11.980 [2024-12-06 04:21:43.026802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:89432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.980 [2024-12-06 04:21:43.026817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:11.980 [2024-12-06 04:21:43.026838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:37 nsid:1 lba:89440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.980 [2024-12-06 04:21:43.026881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:11.980 [2024-12-06 04:21:43.026906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:89448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.980 [2024-12-06 04:21:43.026922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:11.981 [2024-12-06 04:21:43.026944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:89456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.981 [2024-12-06 04:21:43.026959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:11.981 [2024-12-06 04:21:43.026995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:89464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.981 [2024-12-06 04:21:43.027010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:11.981 [2024-12-06 04:21:43.027031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:89472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.981 [2024-12-06 04:21:43.027046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:11.981 [2024-12-06 04:21:43.027066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:89480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.981 [2024-12-06 04:21:43.027081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:11.981 [2024-12-06 04:21:43.027101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:89488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.981 [2024-12-06 04:21:43.027115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:11.981 [2024-12-06 04:21:43.027137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:89496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.981 [2024-12-06 04:21:43.027153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:11.981 [2024-12-06 04:21:43.027194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:89504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.981 [2024-12-06 04:21:43.027214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:11.981 [2024-12-06 04:21:43.027236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:89512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.981 [2024-12-06 04:21:43.027251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:11.981 [2024-12-06 04:21:43.027273] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:89520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.981 [2024-12-06 04:21:43.027287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:11.981 [2024-12-06 04:21:43.027307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:89528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.981 [2024-12-06 04:21:43.027322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:11.981 [2024-12-06 04:21:43.027342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:89536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.981 [2024-12-06 04:21:43.027366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:11.981 [2024-12-06 04:21:43.027389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:89544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.981 [2024-12-06 04:21:43.027404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:11.981 [2024-12-06 04:21:43.027437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:89552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.981 [2024-12-06 04:21:43.027455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:11.981 [2024-12-06 04:21:43.027477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:89560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.981 [2024-12-06 04:21:43.027492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:11.981 [2024-12-06 04:21:43.027512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.981 [2024-12-06 04:21:43.027527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:11.981 [2024-12-06 04:21:43.027547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:88952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.981 [2024-12-06 04:21:43.027562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:11.981 [2024-12-06 04:21:43.027583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:89008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.981 [2024-12-06 04:21:43.027598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:11.981 [2024-12-06 04:21:43.027619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:89024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.981 [2024-12-06 04:21:43.027633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007c p:0 
m:0 dnr:0 00:19:11.981 [2024-12-06 04:21:43.027654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:89032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.981 [2024-12-06 04:21:43.027668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:11.981 [2024-12-06 04:21:43.027689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:89048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.981 [2024-12-06 04:21:43.027703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:11.981 [2024-12-06 04:21:43.027724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:89056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.981 [2024-12-06 04:21:43.027739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:11.981 [2024-12-06 04:21:43.027760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:89080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.981 [2024-12-06 04:21:43.027774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.981 [2024-12-06 04:21:43.027796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:89088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.981 [2024-12-06 04:21:43.027810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:11.981 [2024-12-06 04:21:43.027840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:89576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.981 [2024-12-06 04:21:43.027855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:11.981 [2024-12-06 04:21:43.027876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:89584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.981 [2024-12-06 04:21:43.027891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:11.981 [2024-12-06 04:21:43.027911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:89592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.981 [2024-12-06 04:21:43.027926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:11.981 [2024-12-06 04:21:43.027946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:89600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.981 [2024-12-06 04:21:43.027961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:11.981 [2024-12-06 04:21:43.027982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.981 [2024-12-06 04:21:43.027996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:11.981 [2024-12-06 04:21:43.028017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:89616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.981 [2024-12-06 04:21:43.028031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:11.981 [2024-12-06 04:21:43.028052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:89624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.981 [2024-12-06 04:21:43.028067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:11.981 [2024-12-06 04:21:43.028087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:89632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.981 [2024-12-06 04:21:43.028102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:11.981 [2024-12-06 04:21:43.028122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:89640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.981 [2024-12-06 04:21:43.028136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:11.981 [2024-12-06 04:21:43.028157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:89648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.981 [2024-12-06 04:21:43.028172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:11.981 [2024-12-06 04:21:43.028196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:89656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.981 [2024-12-06 04:21:43.028212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:11.981 [2024-12-06 04:21:43.028233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:89664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.981 [2024-12-06 04:21:43.028247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:11.981 [2024-12-06 04:21:43.028276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:89672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.981 [2024-12-06 04:21:43.028292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:11.982 [2024-12-06 04:21:43.028314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:89680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.982 [2024-12-06 04:21:43.028330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:11.982 [2024-12-06 04:21:43.028351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:89688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.982 [2024-12-06 04:21:43.028365] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:11.982 [2024-12-06 04:21:43.028398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:89696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.982 [2024-12-06 04:21:43.028416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:11.982 [2024-12-06 04:21:43.028437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:89704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.982 [2024-12-06 04:21:43.028452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:11.982 [2024-12-06 04:21:43.028473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:89712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.982 [2024-12-06 04:21:43.028487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:11.982 [2024-12-06 04:21:43.028508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:89720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.982 [2024-12-06 04:21:43.028523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:11.982 [2024-12-06 04:21:43.028543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:89728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.982 [2024-12-06 04:21:43.028558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:11.982 [2024-12-06 04:21:43.028578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:89736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.982 [2024-12-06 04:21:43.028593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:11.982 [2024-12-06 04:21:43.028614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:89744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.982 [2024-12-06 04:21:43.028629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:11.982 [2024-12-06 04:21:43.028650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:89752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.982 [2024-12-06 04:21:43.028665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:11.982 [2024-12-06 04:21:43.028685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:89760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.982 [2024-12-06 04:21:43.028700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:11.982 [2024-12-06 04:21:43.028721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:89768 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:19:11.982 [2024-12-06 04:21:43.028742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:11.982 [2024-12-06 04:21:43.028764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:89776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.982 [2024-12-06 04:21:43.028779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:11.982 [2024-12-06 04:21:43.028799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:89096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.982 [2024-12-06 04:21:43.028814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:11.982 [2024-12-06 04:21:43.028835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:89104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.982 [2024-12-06 04:21:43.028850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:11.982 [2024-12-06 04:21:43.028871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:89112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.982 [2024-12-06 04:21:43.028885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:11.982 [2024-12-06 04:21:43.028907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:89136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.982 [2024-12-06 04:21:43.028922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:11.982 [2024-12-06 04:21:43.028943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:89144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.982 [2024-12-06 04:21:43.028957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:11.982 [2024-12-06 04:21:43.028978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:89152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.982 [2024-12-06 04:21:43.028993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:11.982 [2024-12-06 04:21:43.029014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:89160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.982 [2024-12-06 04:21:43.029029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:11.982 [2024-12-06 04:21:43.029049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:89168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.982 [2024-12-06 04:21:43.029064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:11.982 [2024-12-06 04:21:43.029084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:36 nsid:1 lba:89784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.982 [2024-12-06 04:21:43.029099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:11.982 [2024-12-06 04:21:43.029119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:89792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.982 [2024-12-06 04:21:43.029134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:11.982 [2024-12-06 04:21:43.029154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:89800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.982 [2024-12-06 04:21:43.029176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:11.982 [2024-12-06 04:21:43.029199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:89808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.982 [2024-12-06 04:21:43.029215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:11.982 [2024-12-06 04:21:43.029236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:89816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.982 [2024-12-06 04:21:43.029250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:11.982 [2024-12-06 04:21:43.029271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:89824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.982 [2024-12-06 04:21:43.029285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:11.982 [2024-12-06 04:21:43.029306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:89832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.982 [2024-12-06 04:21:43.029320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:11.982 [2024-12-06 04:21:43.029340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:89840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.982 [2024-12-06 04:21:43.029355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:11.982 [2024-12-06 04:21:43.029376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:89848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.982 [2024-12-06 04:21:43.029401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:11.982 [2024-12-06 04:21:43.029440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:89856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.982 [2024-12-06 04:21:43.029455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:11.982 [2024-12-06 04:21:43.029476] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:89176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.982 [2024-12-06 04:21:43.029492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:11.982 [2024-12-06 04:21:43.029515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:89184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.982 [2024-12-06 04:21:43.029537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:11.982 [2024-12-06 04:21:43.029560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:89192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.982 [2024-12-06 04:21:43.029575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:11.982 [2024-12-06 04:21:43.029595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:89200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.982 [2024-12-06 04:21:43.029610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:11.982 [2024-12-06 04:21:43.029631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:89216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.982 [2024-12-06 04:21:43.029646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:11.982 [2024-12-06 04:21:43.029676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:89224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.982 [2024-12-06 04:21:43.029692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:11.982 [2024-12-06 04:21:43.030588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:89232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.982 [2024-12-06 04:21:43.030618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:11.982 [2024-12-06 04:21:43.030655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:89264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.982 [2024-12-06 04:21:43.030672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:11.983 [2024-12-06 04:21:43.030702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:89864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.983 [2024-12-06 04:21:43.030719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:11.983 [2024-12-06 04:21:43.030749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:89872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.983 [2024-12-06 04:21:43.030764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0037 p:0 m:0 
dnr:0 00:19:11.983 [2024-12-06 04:21:43.030795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:89880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.983 [2024-12-06 04:21:43.030810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:11.983 [2024-12-06 04:21:43.030840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:89888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.983 [2024-12-06 04:21:43.030856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:11.983 [2024-12-06 04:21:43.030886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:89896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.983 [2024-12-06 04:21:43.030902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:11.983 [2024-12-06 04:21:43.030931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:89904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.983 [2024-12-06 04:21:43.030947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:11.983 [2024-12-06 04:21:43.030977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:89912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.983 [2024-12-06 04:21:43.031007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:11.983 [2024-12-06 04:21:43.031043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:89920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.983 [2024-12-06 04:21:43.031058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:11.983 [2024-12-06 04:21:43.031087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:89928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.983 [2024-12-06 04:21:43.031102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:11.983 [2024-12-06 04:21:43.031144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:89936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.983 [2024-12-06 04:21:43.031167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:11.983 [2024-12-06 04:21:43.031197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:89944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.983 [2024-12-06 04:21:43.031213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:11.983 [2024-12-06 04:21:43.031242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:89952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.983 [2024-12-06 04:21:43.031258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:11.983 [2024-12-06 04:21:43.031287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.983 [2024-12-06 04:21:43.031302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:11.983 [2024-12-06 04:21:43.031347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:89968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.983 [2024-12-06 04:21:43.031368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:11.983 [2024-12-06 04:21:43.031398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:89976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.983 [2024-12-06 04:21:43.031426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:11.983 [2024-12-06 04:21:43.031460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:89984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.983 [2024-12-06 04:21:43.031476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:11.983 [2024-12-06 04:21:43.031505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:89992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.983 [2024-12-06 04:21:43.031521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:11.983 [2024-12-06 04:21:43.031550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:90000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.983 [2024-12-06 04:21:43.031566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:11.983 [2024-12-06 04:21:50.047772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:42152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.983 [2024-12-06 04:21:50.047844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:11.983 [2024-12-06 04:21:50.047928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:42160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.983 [2024-12-06 04:21:50.047950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:11.983 [2024-12-06 04:21:50.047974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:42168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.983 [2024-12-06 04:21:50.047990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:11.983 [2024-12-06 04:21:50.048012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:42176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.983 [2024-12-06 04:21:50.048047] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:11.983 [2024-12-06 04:21:50.048071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:42184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.983 [2024-12-06 04:21:50.048087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:11.983 [2024-12-06 04:21:50.048123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:42192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.983 [2024-12-06 04:21:50.048138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:11.983 [2024-12-06 04:21:50.048158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:42200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.983 [2024-12-06 04:21:50.048173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:11.983 [2024-12-06 04:21:50.048194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:42208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.983 [2024-12-06 04:21:50.048222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:11.983 [2024-12-06 04:21:50.048242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:41576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.983 [2024-12-06 04:21:50.048257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:11.983 [2024-12-06 04:21:50.048276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:41584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.983 [2024-12-06 04:21:50.048290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:11.983 [2024-12-06 04:21:50.048310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:41632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.983 [2024-12-06 04:21:50.048324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:11.983 [2024-12-06 04:21:50.048344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:41672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.983 [2024-12-06 04:21:50.048374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:11.983 [2024-12-06 04:21:50.048411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:41696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.983 [2024-12-06 04:21:50.048426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:11.983 [2024-12-06 04:21:50.048446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:41728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:11.983 [2024-12-06 04:21:50.048493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:11.983 [2024-12-06 04:21:50.048520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:41752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.983 [2024-12-06 04:21:50.048536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:11.983 [2024-12-06 04:21:50.048558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:41760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.983 [2024-12-06 04:21:50.048583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:11.983 [2024-12-06 04:21:50.048607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:42216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.983 [2024-12-06 04:21:50.048623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:11.983 [2024-12-06 04:21:50.048648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:42224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.983 [2024-12-06 04:21:50.048664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:11.983 [2024-12-06 04:21:50.048686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:42232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.983 [2024-12-06 04:21:50.048702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:11.983 [2024-12-06 04:21:50.048738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:42240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.983 [2024-12-06 04:21:50.048754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:11.983 [2024-12-06 04:21:50.048775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:42248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.983 [2024-12-06 04:21:50.048790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:11.984 [2024-12-06 04:21:50.048812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:42256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.984 [2024-12-06 04:21:50.048827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:11.984 [2024-12-06 04:21:50.048848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:42264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.984 [2024-12-06 04:21:50.048863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:11.984 [2024-12-06 04:21:50.048885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 
nsid:1 lba:42272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.984 [2024-12-06 04:21:50.048900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:11.984 [2024-12-06 04:21:50.048922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:42280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.984 [2024-12-06 04:21:50.048953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:11.984 [2024-12-06 04:21:50.049297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:42288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.984 [2024-12-06 04:21:50.049322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:11.984 [2024-12-06 04:21:50.049346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:42296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.984 [2024-12-06 04:21:50.049362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:11.984 [2024-12-06 04:21:50.049384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:42304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.984 [2024-12-06 04:21:50.049417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:11.984 [2024-12-06 04:21:50.049451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:42312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.984 [2024-12-06 04:21:50.049469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:11.984 [2024-12-06 04:21:50.049509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:42320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.984 [2024-12-06 04:21:50.049527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:11.984 [2024-12-06 04:21:50.049564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:42328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.984 [2024-12-06 04:21:50.049580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:11.984 [2024-12-06 04:21:50.049602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:42336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.984 [2024-12-06 04:21:50.049617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:11.984 [2024-12-06 04:21:50.049639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:42344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.984 [2024-12-06 04:21:50.049669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:11.984 [2024-12-06 04:21:50.049691] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:42352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.984 [2024-12-06 04:21:50.049706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:11.984 [2024-12-06 04:21:50.049727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:42360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.984 [2024-12-06 04:21:50.049742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:11.984 [2024-12-06 04:21:50.049763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:42368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.984 [2024-12-06 04:21:50.049778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:11.984 [2024-12-06 04:21:50.049801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:42376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.984 [2024-12-06 04:21:50.049816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:11.984 [2024-12-06 04:21:50.049853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:42384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.984 [2024-12-06 04:21:50.049868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:11.984 [2024-12-06 04:21:50.049890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:42392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.984 [2024-12-06 04:21:50.049905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:11.984 [2024-12-06 04:21:50.049927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:42400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.984 [2024-12-06 04:21:50.049942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:11.984 [2024-12-06 04:21:50.049974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:42408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.984 [2024-12-06 04:21:50.049990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:11.984 [2024-12-06 04:21:50.050012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:41784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.984 [2024-12-06 04:21:50.050028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:11.984 [2024-12-06 04:21:50.050049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:41816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.984 [2024-12-06 04:21:50.050064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 
00:19:11.984 [2024-12-06 04:21:50.050087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:41824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.984 [2024-12-06 04:21:50.050103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:11.984 [2024-12-06 04:21:50.050125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:41840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.984 [2024-12-06 04:21:50.050140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:11.984 [2024-12-06 04:21:50.050161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:41856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.984 [2024-12-06 04:21:50.050176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:11.984 [2024-12-06 04:21:50.050198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:41872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.984 [2024-12-06 04:21:50.050213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:11.984 [2024-12-06 04:21:50.050235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:41880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.984 [2024-12-06 04:21:50.050264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:11.984 [2024-12-06 04:21:50.050286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:41904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.984 [2024-12-06 04:21:50.050300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:11.984 [2024-12-06 04:21:50.050321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:42416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.984 [2024-12-06 04:21:50.050336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:11.984 [2024-12-06 04:21:50.050357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:42424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.984 [2024-12-06 04:21:50.050372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:11.984 [2024-12-06 04:21:50.050393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:42432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.984 [2024-12-06 04:21:50.050408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:11.984 [2024-12-06 04:21:50.050437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:42440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.984 [2024-12-06 04:21:50.050466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:11.984 [2024-12-06 04:21:50.050489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:42448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.984 [2024-12-06 04:21:50.050505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:11.984 [2024-12-06 04:21:50.050526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.984 [2024-12-06 04:21:50.050541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:11.984 [2024-12-06 04:21:50.050588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:42464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.984 [2024-12-06 04:21:50.050607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:11.984 [2024-12-06 04:21:50.050630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:42472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.984 [2024-12-06 04:21:50.050646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:11.984 [2024-12-06 04:21:50.050668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:42480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.984 [2024-12-06 04:21:50.050684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.984 [2024-12-06 04:21:50.050726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:42488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.984 [2024-12-06 04:21:50.050747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:11.984 [2024-12-06 04:21:50.050771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:42496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.985 [2024-12-06 04:21:50.050787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:11.985 [2024-12-06 04:21:50.050810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:42504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.985 [2024-12-06 04:21:50.050826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:11.985 [2024-12-06 04:21:50.050848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:42512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.985 [2024-12-06 04:21:50.050864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:11.985 [2024-12-06 04:21:50.050886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:42520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.985 [2024-12-06 04:21:50.050916] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:11.985 [2024-12-06 04:21:50.050938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:42528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.985 [2024-12-06 04:21:50.050953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:11.985 [2024-12-06 04:21:50.050975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:42536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.985 [2024-12-06 04:21:50.051000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:11.985 [2024-12-06 04:21:50.051024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:42544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.985 [2024-12-06 04:21:50.051039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:11.985 [2024-12-06 04:21:50.051062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:41912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.985 [2024-12-06 04:21:50.051092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:11.985 [2024-12-06 04:21:50.051115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:41920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.985 [2024-12-06 04:21:50.051130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:11.985 [2024-12-06 04:21:50.051152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:41976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.985 [2024-12-06 04:21:50.051167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:11.985 [2024-12-06 04:21:50.051188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:41992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.985 [2024-12-06 04:21:50.051203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:11.985 [2024-12-06 04:21:50.051224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:42008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.985 [2024-12-06 04:21:50.051239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:11.985 [2024-12-06 04:21:50.051260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:42024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.985 [2024-12-06 04:21:50.051275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:11.985 [2024-12-06 04:21:50.051296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:42032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:11.985 [2024-12-06 04:21:50.051311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:11.985 [2024-12-06 04:21:50.051332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:42040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.985 [2024-12-06 04:21:50.051347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:11.985 [2024-12-06 04:21:50.051368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:42552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.985 [2024-12-06 04:21:50.051383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:11.985 [2024-12-06 04:21:50.051404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:42560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.985 [2024-12-06 04:21:50.051419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:11.985 [2024-12-06 04:21:50.051453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:42568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.985 [2024-12-06 04:21:50.051478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:11.985 [2024-12-06 04:21:50.051501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:42576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.985 [2024-12-06 04:21:50.051517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:11.985 [2024-12-06 04:21:50.051539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:42584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.985 [2024-12-06 04:21:50.051553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:11.985 [2024-12-06 04:21:50.051575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:42592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.985 [2024-12-06 04:21:50.051590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:11.985 [2024-12-06 04:21:50.051611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:42600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.985 [2024-12-06 04:21:50.051627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:11.985 [2024-12-06 04:21:50.051648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:42608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.985 [2024-12-06 04:21:50.051662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:11.985 [2024-12-06 04:21:50.051684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 
lba:42616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.985 [2024-12-06 04:21:50.051698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:11.985 [2024-12-06 04:21:50.051721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:42624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.985 [2024-12-06 04:21:50.051736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:11.985 [2024-12-06 04:21:50.051757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:42632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.985 [2024-12-06 04:21:50.051772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:11.985 [2024-12-06 04:21:50.051793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:42640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.985 [2024-12-06 04:21:50.051808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:11.985 [2024-12-06 04:21:50.051830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:42648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.985 [2024-12-06 04:21:50.051845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:11.985 [2024-12-06 04:21:50.051866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:42656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.985 [2024-12-06 04:21:50.051881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:11.985 [2024-12-06 04:21:50.051902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:42664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.985 [2024-12-06 04:21:50.051917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:11.985 [2024-12-06 04:21:50.051957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:42672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.985 [2024-12-06 04:21:50.051973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:11.985 [2024-12-06 04:21:50.051995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:42680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.985 [2024-12-06 04:21:50.052010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:11.985 [2024-12-06 04:21:50.052031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:42688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.985 [2024-12-06 04:21:50.052046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:11.985 [2024-12-06 04:21:50.052068] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:42696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.985 [2024-12-06 04:21:50.052084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:11.985 [2024-12-06 04:21:50.052106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:42704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.985 [2024-12-06 04:21:50.052121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:11.986 [2024-12-06 04:21:50.052142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:42048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.986 [2024-12-06 04:21:50.052157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:11.986 [2024-12-06 04:21:50.052178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:42056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.986 [2024-12-06 04:21:50.052193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:11.986 [2024-12-06 04:21:50.052214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:42080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.986 [2024-12-06 04:21:50.052229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:11.986 [2024-12-06 04:21:50.052250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:42088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.986 [2024-12-06 04:21:50.052265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:11.986 [2024-12-06 04:21:50.052287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:42112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.986 [2024-12-06 04:21:50.052302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:11.986 [2024-12-06 04:21:50.052323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:42120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.986 [2024-12-06 04:21:50.052338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:11.986 [2024-12-06 04:21:50.052360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:42128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.986 [2024-12-06 04:21:50.052375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:11.986 [2024-12-06 04:21:50.053303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:42136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.986 [2024-12-06 04:21:50.053332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002c p:0 m:0 
dnr:0 00:19:11.986 [2024-12-06 04:21:50.053367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:42712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.986 [2024-12-06 04:21:50.053396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:11.986 [2024-12-06 04:21:50.053458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:42720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.986 [2024-12-06 04:21:50.053475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:11.986 [2024-12-06 04:21:50.053504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:42728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.986 [2024-12-06 04:21:50.053519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:11.986 [2024-12-06 04:21:50.053549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:42736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.986 [2024-12-06 04:21:50.053565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:11.986 [2024-12-06 04:21:50.053594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:42744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.986 [2024-12-06 04:21:50.053610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:11.986 [2024-12-06 04:21:50.053639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:42752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.986 [2024-12-06 04:21:50.053655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:11.986 [2024-12-06 04:21:50.053684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:42760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.986 [2024-12-06 04:21:50.053700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:11.986 [2024-12-06 04:21:50.053729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:42768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.986 [2024-12-06 04:21:50.053745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:11.986 [2024-12-06 04:21:50.053774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:42776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.986 [2024-12-06 04:21:50.053791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:11.986 [2024-12-06 04:21:50.053850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:42784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.986 [2024-12-06 04:21:50.053865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:11.986 [2024-12-06 04:21:50.053894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:42792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.986 [2024-12-06 04:21:50.053910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:11.986 [2024-12-06 04:21:50.053939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:42800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.986 [2024-12-06 04:21:50.053967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:11.986 [2024-12-06 04:21:50.053998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:42808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.986 [2024-12-06 04:21:50.054015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:11.986 [2024-12-06 04:21:50.054045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:42816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.986 [2024-12-06 04:21:50.054061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:11.986 [2024-12-06 04:21:50.054090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:42824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.986 [2024-12-06 04:21:50.054106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:11.986 [2024-12-06 04:21:50.054135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:42832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.986 [2024-12-06 04:21:50.054150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:11.986 [2024-12-06 04:21:50.054180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:42840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.986 [2024-12-06 04:21:50.054196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:11.986 [2024-12-06 04:22:03.452167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:21544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.986 [2024-12-06 04:22:03.452221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.986 [2024-12-06 04:22:03.452247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:20856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.986 [2024-12-06 04:22:03.452263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.986 [2024-12-06 04:22:03.452278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:20864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.986 [2024-12-06 04:22:03.452291] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.986 [2024-12-06 04:22:03.452306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.986 [2024-12-06 04:22:03.452319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.986 [2024-12-06 04:22:03.452334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:20888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.986 [2024-12-06 04:22:03.452348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.986 [2024-12-06 04:22:03.452362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:20912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.986 [2024-12-06 04:22:03.452375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.986 [2024-12-06 04:22:03.452390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:20920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.986 [2024-12-06 04:22:03.452404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.986 [2024-12-06 04:22:03.452468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.986 [2024-12-06 04:22:03.452485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.986 [2024-12-06 04:22:03.452500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:20936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.986 [2024-12-06 04:22:03.452513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.986 [2024-12-06 04:22:03.452528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:21560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.986 [2024-12-06 04:22:03.452542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.986 [2024-12-06 04:22:03.452573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:21576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.986 [2024-12-06 04:22:03.452587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.986 [2024-12-06 04:22:03.452603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.986 [2024-12-06 04:22:03.452617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.986 [2024-12-06 04:22:03.452632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:20952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.986 [2024-12-06 04:22:03.452646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.986 [2024-12-06 04:22:03.452662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.986 [2024-12-06 04:22:03.452675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.987 [2024-12-06 04:22:03.452691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:20968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.987 [2024-12-06 04:22:03.452705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.987 [2024-12-06 04:22:03.452720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:20976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.987 [2024-12-06 04:22:03.452734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.987 [2024-12-06 04:22:03.452749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:20984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.987 [2024-12-06 04:22:03.452769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.987 [2024-12-06 04:22:03.452785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:21000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.987 [2024-12-06 04:22:03.452799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.987 [2024-12-06 04:22:03.452814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:21008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.987 [2024-12-06 04:22:03.452828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.987 [2024-12-06 04:22:03.452843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:21032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.987 [2024-12-06 04:22:03.452866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.987 [2024-12-06 04:22:03.452883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:21592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.987 [2024-12-06 04:22:03.452897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.987 [2024-12-06 04:22:03.452913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:21608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.987 [2024-12-06 04:22:03.452927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.987 [2024-12-06 04:22:03.452943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:21624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.987 [2024-12-06 04:22:03.452957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.987 [2024-12-06 04:22:03.452972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:21632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.987 [2024-12-06 04:22:03.452987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.987 [2024-12-06 04:22:03.453002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:21640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.987 [2024-12-06 04:22:03.453016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.987 [2024-12-06 04:22:03.453032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:21648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.987 [2024-12-06 04:22:03.453046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.987 [2024-12-06 04:22:03.453062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:21656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.987 [2024-12-06 04:22:03.453076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.987 [2024-12-06 04:22:03.453091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:21664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.987 [2024-12-06 04:22:03.453105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.987 [2024-12-06 04:22:03.453121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:21672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.987 [2024-12-06 04:22:03.453135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.987 [2024-12-06 04:22:03.453151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:21680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.987 [2024-12-06 04:22:03.453165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.987 [2024-12-06 04:22:03.453180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:21688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.987 [2024-12-06 04:22:03.453194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.987 [2024-12-06 04:22:03.453209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:21056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.987 [2024-12-06 04:22:03.453232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.987 [2024-12-06 04:22:03.453255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:21080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.987 [2024-12-06 04:22:03.453271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.987 
[2024-12-06 04:22:03.453287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.987 [2024-12-06 04:22:03.453301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.987 [2024-12-06 04:22:03.453317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.987 [2024-12-06 04:22:03.453332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.987 [2024-12-06 04:22:03.453347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:21136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.987 [2024-12-06 04:22:03.453361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.987 [2024-12-06 04:22:03.453377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:21144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.987 [2024-12-06 04:22:03.453391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.987 [2024-12-06 04:22:03.453406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:21152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.987 [2024-12-06 04:22:03.453432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.987 [2024-12-06 04:22:03.453449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:21160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.987 [2024-12-06 04:22:03.453463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.987 [2024-12-06 04:22:03.453480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.987 [2024-12-06 04:22:03.453494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.987 [2024-12-06 04:22:03.453510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.987 [2024-12-06 04:22:03.453525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.987 [2024-12-06 04:22:03.453540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:21712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.987 [2024-12-06 04:22:03.453560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.987 [2024-12-06 04:22:03.453576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:21720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.987 [2024-12-06 04:22:03.453590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.987 [2024-12-06 04:22:03.453605] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:21728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.987 [2024-12-06 04:22:03.453619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.987 [2024-12-06 04:22:03.453635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:21736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.987 [2024-12-06 04:22:03.453656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.987 [2024-12-06 04:22:03.453672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.987 [2024-12-06 04:22:03.453687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.987 [2024-12-06 04:22:03.453702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:21752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.987 [2024-12-06 04:22:03.453716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.987 [2024-12-06 04:22:03.453732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:21760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.987 [2024-12-06 04:22:03.453746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.987 [2024-12-06 04:22:03.453762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.987 [2024-12-06 04:22:03.453777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.987 [2024-12-06 04:22:03.453793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:21776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.987 [2024-12-06 04:22:03.453807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.987 [2024-12-06 04:22:03.453822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:21784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.987 [2024-12-06 04:22:03.453836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.987 [2024-12-06 04:22:03.453852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:21792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.987 [2024-12-06 04:22:03.453866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.987 [2024-12-06 04:22:03.453881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:21800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.987 [2024-12-06 04:22:03.453895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.987 [2024-12-06 04:22:03.453911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:81 nsid:1 lba:21168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.987 [2024-12-06 04:22:03.453926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.988 [2024-12-06 04:22:03.453941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:21184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.988 [2024-12-06 04:22:03.453955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.988 [2024-12-06 04:22:03.453970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.988 [2024-12-06 04:22:03.453983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.988 [2024-12-06 04:22:03.453999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:21208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.988 [2024-12-06 04:22:03.454013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.988 [2024-12-06 04:22:03.454043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:21216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.988 [2024-12-06 04:22:03.454063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.988 [2024-12-06 04:22:03.454079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:21240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.988 [2024-12-06 04:22:03.454093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.988 [2024-12-06 04:22:03.454108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:21248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.988 [2024-12-06 04:22:03.454122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.988 [2024-12-06 04:22:03.454136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:21264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.988 [2024-12-06 04:22:03.454150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.988 [2024-12-06 04:22:03.454164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:21808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.988 [2024-12-06 04:22:03.454178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.988 [2024-12-06 04:22:03.454193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:21816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.988 [2024-12-06 04:22:03.454206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.988 [2024-12-06 04:22:03.454222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:21824 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.988 [2024-12-06 04:22:03.454236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.988 [2024-12-06 04:22:03.454251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:21832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.988 [2024-12-06 04:22:03.454265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.988 [2024-12-06 04:22:03.454280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:21840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.988 [2024-12-06 04:22:03.454293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.988 [2024-12-06 04:22:03.454308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:21848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.988 [2024-12-06 04:22:03.454322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.988 [2024-12-06 04:22:03.454336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:21856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.988 [2024-12-06 04:22:03.454351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.988 [2024-12-06 04:22:03.454366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:21864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.988 [2024-12-06 04:22:03.454380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.988 [2024-12-06 04:22:03.454395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:21872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.988 [2024-12-06 04:22:03.454421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.988 [2024-12-06 04:22:03.454445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:21880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.988 [2024-12-06 04:22:03.454460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.988 [2024-12-06 04:22:03.454476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:21888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.988 [2024-12-06 04:22:03.454489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.988 [2024-12-06 04:22:03.454504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:21896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.988 [2024-12-06 04:22:03.454518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.988 [2024-12-06 04:22:03.454533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:21904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:11.988 [2024-12-06 04:22:03.454547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.988 [2024-12-06 04:22:03.454561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:21272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.988 [2024-12-06 04:22:03.454604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.988 [2024-12-06 04:22:03.454621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:21312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.988 [2024-12-06 04:22:03.454636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.988 [2024-12-06 04:22:03.454651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.988 [2024-12-06 04:22:03.454665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.988 [2024-12-06 04:22:03.454680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.988 [2024-12-06 04:22:03.454694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.988 [2024-12-06 04:22:03.454710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:21376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.988 [2024-12-06 04:22:03.454724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.988 [2024-12-06 04:22:03.454740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:21408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.988 [2024-12-06 04:22:03.454754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.988 [2024-12-06 04:22:03.454770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:21440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.988 [2024-12-06 04:22:03.454784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.988 [2024-12-06 04:22:03.454800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:21448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.988 [2024-12-06 04:22:03.454814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.988 [2024-12-06 04:22:03.454829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.988 [2024-12-06 04:22:03.454852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.988 [2024-12-06 04:22:03.454869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:21920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.988 [2024-12-06 04:22:03.454885] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.988 [2024-12-06 04:22:03.454901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:21928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.988 [2024-12-06 04:22:03.454915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.988 [2024-12-06 04:22:03.454934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:21936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.988 [2024-12-06 04:22:03.454948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.988 [2024-12-06 04:22:03.454963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:21944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.988 [2024-12-06 04:22:03.454978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.988 [2024-12-06 04:22:03.454993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.988 [2024-12-06 04:22:03.455007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.988 [2024-12-06 04:22:03.455023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:21960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.988 [2024-12-06 04:22:03.455037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.988 [2024-12-06 04:22:03.455052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:21968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.988 [2024-12-06 04:22:03.455067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.988 [2024-12-06 04:22:03.455097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:21976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.988 [2024-12-06 04:22:03.455111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.988 [2024-12-06 04:22:03.455125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:21984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.988 [2024-12-06 04:22:03.455139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.988 [2024-12-06 04:22:03.455154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:21992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.988 [2024-12-06 04:22:03.455168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.988 [2024-12-06 04:22:03.455183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.988 [2024-12-06 04:22:03.455196] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.989 [2024-12-06 04:22:03.455211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:22008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.989 [2024-12-06 04:22:03.455225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.989 [2024-12-06 04:22:03.455246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.989 [2024-12-06 04:22:03.455266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.989 [2024-12-06 04:22:03.455282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.989 [2024-12-06 04:22:03.455296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.989 [2024-12-06 04:22:03.455311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:22032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.989 [2024-12-06 04:22:03.455325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.989 [2024-12-06 04:22:03.455340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:22040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.989 [2024-12-06 04:22:03.455353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.989 [2024-12-06 04:22:03.455368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:22048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.989 [2024-12-06 04:22:03.455381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.989 [2024-12-06 04:22:03.455396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:22056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.989 [2024-12-06 04:22:03.455427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.989 [2024-12-06 04:22:03.455455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:22064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.989 [2024-12-06 04:22:03.455470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.989 [2024-12-06 04:22:03.455486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:22072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.989 [2024-12-06 04:22:03.455500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.989 [2024-12-06 04:22:03.455516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:22080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.989 [2024-12-06 04:22:03.455530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.989 [2024-12-06 04:22:03.455545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:21456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.989 [2024-12-06 04:22:03.455559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.989 [2024-12-06 04:22:03.455574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:21472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.989 [2024-12-06 04:22:03.455589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.989 [2024-12-06 04:22:03.455604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:21480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.989 [2024-12-06 04:22:03.455618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.989 [2024-12-06 04:22:03.455634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.989 [2024-12-06 04:22:03.455655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.989 [2024-12-06 04:22:03.455672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:21496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.989 [2024-12-06 04:22:03.455692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.989 [2024-12-06 04:22:03.455708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:21504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.989 [2024-12-06 04:22:03.455722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.989 [2024-12-06 04:22:03.455738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:21512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.989 [2024-12-06 04:22:03.455752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.989 [2024-12-06 04:22:03.455768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:21520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.989 [2024-12-06 04:22:03.455786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.989 [2024-12-06 04:22:03.455817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:22088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.989 [2024-12-06 04:22:03.455831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.989 [2024-12-06 04:22:03.455846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:22096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.989 [2024-12-06 04:22:03.455859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:11.989 [2024-12-06 04:22:03.455875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.989 [2024-12-06 04:22:03.455888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.989 [2024-12-06 04:22:03.455903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:22112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.989 [2024-12-06 04:22:03.455917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.989 [2024-12-06 04:22:03.455932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:22120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.989 [2024-12-06 04:22:03.455946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.989 [2024-12-06 04:22:03.455960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:22128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.989 [2024-12-06 04:22:03.455974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.989 [2024-12-06 04:22:03.455988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:22136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.989 [2024-12-06 04:22:03.456002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.989 [2024-12-06 04:22:03.456017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:22144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.989 [2024-12-06 04:22:03.456031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.989 [2024-12-06 04:22:03.456045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:22152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.989 [2024-12-06 04:22:03.456066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.989 [2024-12-06 04:22:03.456081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:22160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.989 [2024-12-06 04:22:03.456096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.989 [2024-12-06 04:22:03.456111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:22168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.989 [2024-12-06 04:22:03.456125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.989 [2024-12-06 04:22:03.456139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:21536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.989 [2024-12-06 04:22:03.456153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.989 [2024-12-06 04:22:03.456167] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.989 [2024-12-06 04:22:03.456186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.989 [2024-12-06 04:22:03.456202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:21568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.989 [2024-12-06 04:22:03.456216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.989 [2024-12-06 04:22:03.456230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:21600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:11.989 [2024-12-06 04:22:03.456244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.989 [2024-12-06 04:22:03.456259] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1574810 is same with the state(5) to be set 00:19:11.989 [2024-12-06 04:22:03.456280] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:11.989 [2024-12-06 04:22:03.456292] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:11.989 [2024-12-06 04:22:03.456302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21616 len:8 PRP1 0x0 PRP2 0x0 00:19:11.989 [2024-12-06 04:22:03.456315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.990 [2024-12-06 04:22:03.456375] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1574810 was disconnected and freed. reset controller. 00:19:11.990 [2024-12-06 04:22:03.457513] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:11.990 [2024-12-06 04:22:03.457600] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14fdf30 (9): Bad file descriptor 00:19:11.990 [2024-12-06 04:22:03.457902] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:11.990 [2024-12-06 04:22:03.457981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:11.990 [2024-12-06 04:22:03.458032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:11.990 [2024-12-06 04:22:03.458054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14fdf30 with addr=10.0.0.2, port=4421 00:19:11.990 [2024-12-06 04:22:03.458070] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14fdf30 is same with the state(5) to be set 00:19:11.990 [2024-12-06 04:22:03.458104] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14fdf30 (9): Bad file descriptor 00:19:11.990 [2024-12-06 04:22:03.458149] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:11.990 [2024-12-06 04:22:03.458168] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:11.990 [2024-12-06 04:22:03.458183] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
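The connect() failures above are errno 111 (ECONNREFUSED): the host keeps probing 10.0.0.2:4421 while no listener answers there, the reset attempt fails, and the controller is retried according to its reconnect settings until the path returns — the successful reset is logged next, roughly ten seconds later. If the harness restores that path the same way it originally published it, the restore step would be the usual listener RPC; this is a sketch only, not a command traced in this excerpt:

    # hypothetical restore of the dropped second path; mirrors the
    # nvmf_subsystem_add_listener calls traced elsewhere in this log
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421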
00:19:11.990 [2024-12-06 04:22:03.458213] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:11.990 [2024-12-06 04:22:03.458231] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:11.990 [2024-12-06 04:22:13.510580] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:19:11.990 Received shutdown signal, test time was about 55.378531 seconds 00:19:11.990 00:19:11.990 Latency(us) 00:19:11.990 [2024-12-06T04:22:24.555Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:11.990 [2024-12-06T04:22:24.555Z] Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:11.990 Verification LBA range: start 0x0 length 0x4000 00:19:11.990 Nvme0n1 : 55.38 10805.23 42.21 0.00 0.00 11826.79 262.52 7015926.69 00:19:11.990 [2024-12-06T04:22:24.555Z] =================================================================================================================== 00:19:11.990 [2024-12-06T04:22:24.555Z] Total : 10805.23 42.21 0.00 0.00 11826.79 262.52 7015926.69 00:19:11.990 04:22:23 -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:11.990 04:22:24 -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:19:11.990 04:22:24 -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:11.990 04:22:24 -- host/multipath.sh@125 -- # nvmftestfini 00:19:11.990 04:22:24 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:11.990 04:22:24 -- nvmf/common.sh@116 -- # sync 00:19:11.990 04:22:24 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:11.990 04:22:24 -- nvmf/common.sh@119 -- # set +e 00:19:11.990 04:22:24 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:11.990 04:22:24 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:11.990 rmmod nvme_tcp 00:19:11.990 rmmod nvme_fabrics 00:19:11.990 rmmod nvme_keyring 00:19:11.990 04:22:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:11.990 04:22:24 -- nvmf/common.sh@123 -- # set -e 00:19:11.990 04:22:24 -- nvmf/common.sh@124 -- # return 0 00:19:11.990 04:22:24 -- nvmf/common.sh@477 -- # '[' -n 84778 ']' 00:19:11.990 04:22:24 -- nvmf/common.sh@478 -- # killprocess 84778 00:19:11.990 04:22:24 -- common/autotest_common.sh@936 -- # '[' -z 84778 ']' 00:19:11.990 04:22:24 -- common/autotest_common.sh@940 -- # kill -0 84778 00:19:11.990 04:22:24 -- common/autotest_common.sh@941 -- # uname 00:19:11.990 04:22:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:11.990 04:22:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84778 00:19:11.990 04:22:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:11.990 04:22:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:11.990 killing process with pid 84778 00:19:11.990 04:22:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84778' 00:19:11.990 04:22:24 -- common/autotest_common.sh@955 -- # kill 84778 00:19:11.990 04:22:24 -- common/autotest_common.sh@960 -- # wait 84778 00:19:11.990 04:22:24 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:11.990 04:22:24 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:11.990 04:22:24 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:11.990 04:22:24 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:11.990 04:22:24 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:11.990 
04:22:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:11.990 04:22:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:11.990 04:22:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:11.990 04:22:24 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:19:11.990 00:19:11.990 real 1m1.710s 00:19:11.990 user 2m50.628s 00:19:11.990 sys 0m18.764s 00:19:11.990 04:22:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:11.990 04:22:24 -- common/autotest_common.sh@10 -- # set +x 00:19:11.990 ************************************ 00:19:11.990 END TEST nvmf_multipath 00:19:11.990 ************************************ 00:19:11.990 04:22:24 -- nvmf/nvmf.sh@117 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:19:11.990 04:22:24 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:11.990 04:22:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:11.990 04:22:24 -- common/autotest_common.sh@10 -- # set +x 00:19:11.990 ************************************ 00:19:11.990 START TEST nvmf_timeout 00:19:11.990 ************************************ 00:19:11.990 04:22:24 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:19:12.250 * Looking for test storage... 00:19:12.250 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:12.250 04:22:24 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:19:12.250 04:22:24 -- common/autotest_common.sh@1690 -- # lcov --version 00:19:12.250 04:22:24 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:19:12.250 04:22:24 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:19:12.250 04:22:24 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:19:12.250 04:22:24 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:19:12.250 04:22:24 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:19:12.250 04:22:24 -- scripts/common.sh@335 -- # IFS=.-: 00:19:12.250 04:22:24 -- scripts/common.sh@335 -- # read -ra ver1 00:19:12.250 04:22:24 -- scripts/common.sh@336 -- # IFS=.-: 00:19:12.250 04:22:24 -- scripts/common.sh@336 -- # read -ra ver2 00:19:12.250 04:22:24 -- scripts/common.sh@337 -- # local 'op=<' 00:19:12.250 04:22:24 -- scripts/common.sh@339 -- # ver1_l=2 00:19:12.250 04:22:24 -- scripts/common.sh@340 -- # ver2_l=1 00:19:12.250 04:22:24 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:19:12.250 04:22:24 -- scripts/common.sh@343 -- # case "$op" in 00:19:12.250 04:22:24 -- scripts/common.sh@344 -- # : 1 00:19:12.250 04:22:24 -- scripts/common.sh@363 -- # (( v = 0 )) 00:19:12.250 04:22:24 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:12.250 04:22:24 -- scripts/common.sh@364 -- # decimal 1 00:19:12.250 04:22:24 -- scripts/common.sh@352 -- # local d=1 00:19:12.250 04:22:24 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:12.250 04:22:24 -- scripts/common.sh@354 -- # echo 1 00:19:12.250 04:22:24 -- scripts/common.sh@364 -- # ver1[v]=1 00:19:12.250 04:22:24 -- scripts/common.sh@365 -- # decimal 2 00:19:12.250 04:22:24 -- scripts/common.sh@352 -- # local d=2 00:19:12.250 04:22:24 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:12.250 04:22:24 -- scripts/common.sh@354 -- # echo 2 00:19:12.250 04:22:24 -- scripts/common.sh@365 -- # ver2[v]=2 00:19:12.250 04:22:24 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:19:12.250 04:22:24 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:19:12.250 04:22:24 -- scripts/common.sh@367 -- # return 0 00:19:12.250 04:22:24 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:12.250 04:22:24 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:19:12.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:12.250 --rc genhtml_branch_coverage=1 00:19:12.250 --rc genhtml_function_coverage=1 00:19:12.250 --rc genhtml_legend=1 00:19:12.250 --rc geninfo_all_blocks=1 00:19:12.250 --rc geninfo_unexecuted_blocks=1 00:19:12.250 00:19:12.250 ' 00:19:12.250 04:22:24 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:19:12.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:12.250 --rc genhtml_branch_coverage=1 00:19:12.250 --rc genhtml_function_coverage=1 00:19:12.250 --rc genhtml_legend=1 00:19:12.250 --rc geninfo_all_blocks=1 00:19:12.250 --rc geninfo_unexecuted_blocks=1 00:19:12.250 00:19:12.250 ' 00:19:12.250 04:22:24 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:19:12.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:12.250 --rc genhtml_branch_coverage=1 00:19:12.250 --rc genhtml_function_coverage=1 00:19:12.250 --rc genhtml_legend=1 00:19:12.250 --rc geninfo_all_blocks=1 00:19:12.250 --rc geninfo_unexecuted_blocks=1 00:19:12.250 00:19:12.250 ' 00:19:12.250 04:22:24 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:19:12.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:12.250 --rc genhtml_branch_coverage=1 00:19:12.250 --rc genhtml_function_coverage=1 00:19:12.250 --rc genhtml_legend=1 00:19:12.250 --rc geninfo_all_blocks=1 00:19:12.250 --rc geninfo_unexecuted_blocks=1 00:19:12.250 00:19:12.250 ' 00:19:12.250 04:22:24 -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:12.250 04:22:24 -- nvmf/common.sh@7 -- # uname -s 00:19:12.250 04:22:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:12.250 04:22:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:12.250 04:22:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:12.250 04:22:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:12.250 04:22:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:12.250 04:22:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:12.250 04:22:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:12.250 04:22:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:12.250 04:22:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:12.250 04:22:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:12.250 04:22:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cb4d3929-adbe-4142-b5d1-990bbf2d4fca 00:19:12.250 
04:22:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=cb4d3929-adbe-4142-b5d1-990bbf2d4fca 00:19:12.250 04:22:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:12.250 04:22:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:12.250 04:22:24 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:12.250 04:22:24 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:12.250 04:22:24 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:12.250 04:22:24 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:12.250 04:22:24 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:12.250 04:22:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:12.250 04:22:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:12.250 04:22:24 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:12.250 04:22:24 -- paths/export.sh@5 -- # export PATH 00:19:12.250 04:22:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:12.250 04:22:24 -- nvmf/common.sh@46 -- # : 0 00:19:12.250 04:22:24 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:12.250 04:22:24 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:12.250 04:22:24 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:12.250 04:22:24 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:12.250 04:22:24 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:12.250 04:22:24 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
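The NVME_HOST array and NVME_CONNECT value defined above are the hooks common.sh provides for driving the kernel initiator with nvme-cli; this run does its I/O through bdevperf instead, but composed against the subsystem and 4420 listener created later in the job, the call would look roughly like the sketch below (not a command this test issues):

    # hypothetical kernel-initiator connect built from the host NQN/ID generated above
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cb4d3929-adbe-4142-b5d1-990bbf2d4fca \
        --hostid=cb4d3929-adbe-4142-b5d1-990bbf2d4fca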
00:19:12.250 04:22:24 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:12.250 04:22:24 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:12.250 04:22:24 -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:12.251 04:22:24 -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:12.251 04:22:24 -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:12.251 04:22:24 -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:19:12.251 04:22:24 -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:12.251 04:22:24 -- host/timeout.sh@19 -- # nvmftestinit 00:19:12.251 04:22:24 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:12.251 04:22:24 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:12.251 04:22:24 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:12.251 04:22:24 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:12.251 04:22:24 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:12.251 04:22:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:12.251 04:22:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:12.251 04:22:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:12.251 04:22:24 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:19:12.251 04:22:24 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:19:12.251 04:22:24 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:19:12.251 04:22:24 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:19:12.251 04:22:24 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:19:12.251 04:22:24 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:19:12.251 04:22:24 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:12.251 04:22:24 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:12.251 04:22:24 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:12.251 04:22:24 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:19:12.251 04:22:24 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:12.251 04:22:24 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:12.251 04:22:24 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:12.251 04:22:24 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:12.251 04:22:24 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:12.251 04:22:24 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:12.251 04:22:24 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:12.251 04:22:24 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:12.251 04:22:24 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:19:12.251 04:22:24 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:19:12.251 Cannot find device "nvmf_tgt_br" 00:19:12.251 04:22:24 -- nvmf/common.sh@154 -- # true 00:19:12.251 04:22:24 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:19:12.251 Cannot find device "nvmf_tgt_br2" 00:19:12.251 04:22:24 -- nvmf/common.sh@155 -- # true 00:19:12.251 04:22:24 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:19:12.251 04:22:24 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:19:12.251 Cannot find device "nvmf_tgt_br" 00:19:12.251 04:22:24 -- nvmf/common.sh@157 -- # true 00:19:12.251 04:22:24 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:19:12.251 Cannot find device "nvmf_tgt_br2" 00:19:12.251 04:22:24 -- nvmf/common.sh@158 -- # true 00:19:12.251 04:22:24 -- 
nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:19:12.511 04:22:24 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:19:12.511 04:22:24 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:12.511 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:12.511 04:22:24 -- nvmf/common.sh@161 -- # true 00:19:12.511 04:22:24 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:12.511 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:12.511 04:22:24 -- nvmf/common.sh@162 -- # true 00:19:12.511 04:22:24 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:19:12.511 04:22:24 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:12.511 04:22:24 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:12.511 04:22:24 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:12.511 04:22:24 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:12.511 04:22:24 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:12.511 04:22:24 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:12.511 04:22:24 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:12.511 04:22:24 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:12.511 04:22:24 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:19:12.511 04:22:24 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:19:12.511 04:22:24 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:19:12.511 04:22:24 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:19:12.511 04:22:24 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:12.511 04:22:24 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:12.511 04:22:24 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:12.511 04:22:24 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:19:12.511 04:22:24 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:19:12.511 04:22:24 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:19:12.511 04:22:24 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:12.511 04:22:24 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:12.511 04:22:25 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:12.511 04:22:25 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:12.511 04:22:25 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:19:12.511 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:12.511 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:19:12.511 00:19:12.511 --- 10.0.0.2 ping statistics --- 00:19:12.511 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:12.511 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:19:12.511 04:22:25 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:19:12.511 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:19:12.511 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:19:12.511 00:19:12.511 --- 10.0.0.3 ping statistics --- 00:19:12.511 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:12.511 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:19:12.511 04:22:25 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:12.511 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:12.511 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:19:12.511 00:19:12.511 --- 10.0.0.1 ping statistics --- 00:19:12.511 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:12.511 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:19:12.511 04:22:25 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:12.511 04:22:25 -- nvmf/common.sh@421 -- # return 0 00:19:12.511 04:22:25 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:12.511 04:22:25 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:12.511 04:22:25 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:12.511 04:22:25 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:12.511 04:22:25 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:12.511 04:22:25 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:12.511 04:22:25 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:12.511 04:22:25 -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:19:12.511 04:22:25 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:12.511 04:22:25 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:12.511 04:22:25 -- common/autotest_common.sh@10 -- # set +x 00:19:12.511 04:22:25 -- nvmf/common.sh@469 -- # nvmfpid=85960 00:19:12.511 04:22:25 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:19:12.511 04:22:25 -- nvmf/common.sh@470 -- # waitforlisten 85960 00:19:12.511 04:22:25 -- common/autotest_common.sh@829 -- # '[' -z 85960 ']' 00:19:12.511 04:22:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:12.511 04:22:25 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:12.511 04:22:25 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:12.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:12.511 04:22:25 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:12.511 04:22:25 -- common/autotest_common.sh@10 -- # set +x 00:19:12.771 [2024-12-06 04:22:25.096842] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:19:12.771 [2024-12-06 04:22:25.096950] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:12.771 [2024-12-06 04:22:25.232850] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:12.771 [2024-12-06 04:22:25.307039] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:12.771 [2024-12-06 04:22:25.307183] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:12.771 [2024-12-06 04:22:25.307195] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:12.771 [2024-12-06 04:22:25.307203] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:12.771 [2024-12-06 04:22:25.307700] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:12.771 [2024-12-06 04:22:25.307715] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:13.707 04:22:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:13.707 04:22:26 -- common/autotest_common.sh@862 -- # return 0 00:19:13.707 04:22:26 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:13.707 04:22:26 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:13.707 04:22:26 -- common/autotest_common.sh@10 -- # set +x 00:19:13.707 04:22:26 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:13.707 04:22:26 -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:13.707 04:22:26 -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:13.966 [2024-12-06 04:22:26.358221] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:13.966 04:22:26 -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:19:14.225 Malloc0 00:19:14.225 04:22:26 -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:14.484 04:22:26 -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:14.755 04:22:27 -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:15.014 [2024-12-06 04:22:27.416340] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:15.014 04:22:27 -- host/timeout.sh@32 -- # bdevperf_pid=86015 00:19:15.014 04:22:27 -- host/timeout.sh@34 -- # waitforlisten 86015 /var/tmp/bdevperf.sock 00:19:15.014 04:22:27 -- common/autotest_common.sh@829 -- # '[' -z 86015 ']' 00:19:15.014 04:22:27 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:15.014 04:22:27 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:15.014 04:22:27 -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:19:15.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:15.014 04:22:27 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:15.014 04:22:27 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:15.014 04:22:27 -- common/autotest_common.sh@10 -- # set +x 00:19:15.014 [2024-12-06 04:22:27.477160] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
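Condensed for readability, the target-side provisioning traced above amounts to the following rpc.py sequence (a restatement of commands already shown in this log, not additional steps):

    # target-side setup as traced from host/timeout.sh above
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420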
00:19:15.014 [2024-12-06 04:22:27.477248] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86015 ] 00:19:15.273 [2024-12-06 04:22:27.610157] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:15.273 [2024-12-06 04:22:27.691512] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:16.211 04:22:28 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:16.211 04:22:28 -- common/autotest_common.sh@862 -- # return 0 00:19:16.211 04:22:28 -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:19:16.211 04:22:28 -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:19:16.780 NVMe0n1 00:19:16.780 04:22:29 -- host/timeout.sh@51 -- # rpc_pid=86033 00:19:16.780 04:22:29 -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:16.780 04:22:29 -- host/timeout.sh@53 -- # sleep 1 00:19:16.780 Running I/O for 10 seconds... 00:19:17.794 04:22:30 -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:17.794 [2024-12-06 04:22:30.311941] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6f9c0 is same with the state(5) to be set 00:19:17.794 [2024-12-06 04:22:30.312007] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6f9c0 is same with the state(5) to be set 00:19:17.794 [2024-12-06 04:22:30.312018] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6f9c0 is same with the state(5) to be set 00:19:17.794 [2024-12-06 04:22:30.312027] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6f9c0 is same with the state(5) to be set 00:19:17.794 [2024-12-06 04:22:30.312035] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6f9c0 is same with the state(5) to be set 00:19:17.794 [2024-12-06 04:22:30.312043] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6f9c0 is same with the state(5) to be set 00:19:17.794 [2024-12-06 04:22:30.312051] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6f9c0 is same with the state(5) to be set 00:19:17.794 [2024-12-06 04:22:30.312059] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6f9c0 is same with the state(5) to be set 00:19:17.794 [2024-12-06 04:22:30.312067] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6f9c0 is same with the state(5) to be set 00:19:17.794 [2024-12-06 04:22:30.312075] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6f9c0 is same with the state(5) to be set 00:19:17.794 [2024-12-06 04:22:30.312083] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6f9c0 is same with the state(5) to be set 00:19:17.794 [2024-12-06 04:22:30.312145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:128168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
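On the host side, the traced sequence reduces to: setting the bdev_nvme options with bdev_nvme_set_options -r -1, attaching the controller with a 5 second ctrlr-loss timeout and 2 second reconnect delay, starting the verify workload through bdevperf.py (backgrounded; the script records its pid as rpc_pid), and then removing the 4420 listener to break the path, after which the ABORTED - SQ DELETION stream below is logged. Collected in one place (a restatement of commands already traced, not additional steps):

    # host-side setup and fault injection as traced from host/timeout.sh above
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
    sleep 1
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420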
00:19:17.794 [2024-12-06 04:22:30.312174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.794 [2024-12-06 04:22:30.312196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:128176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.794 [2024-12-06 04:22:30.312206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.794 [2024-12-06 04:22:30.312217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:128208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.794 [2024-12-06 04:22:30.312226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.794 [2024-12-06 04:22:30.312237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:128240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.794 [2024-12-06 04:22:30.312246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.794 [2024-12-06 04:22:30.312257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:128248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.794 [2024-12-06 04:22:30.312266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.794 [2024-12-06 04:22:30.312276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:128792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.794 [2024-12-06 04:22:30.312284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.794 [2024-12-06 04:22:30.312295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:128824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.794 [2024-12-06 04:22:30.312303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.794 [2024-12-06 04:22:30.312314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:128864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.794 [2024-12-06 04:22:30.312324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.795 [2024-12-06 04:22:30.312335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:128872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.795 [2024-12-06 04:22:30.312344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.795 [2024-12-06 04:22:30.312354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:128880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.795 [2024-12-06 04:22:30.312363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.795 [2024-12-06 04:22:30.312373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:128888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.795 [2024-12-06 
04:22:30.312382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.795 [2024-12-06 04:22:30.312392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:128920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.795 [2024-12-06 04:22:30.312401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.795 [2024-12-06 04:22:30.312412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:128928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.795 [2024-12-06 04:22:30.312421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.795 [2024-12-06 04:22:30.312445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:128936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.795 [2024-12-06 04:22:30.312456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.795 [2024-12-06 04:22:30.312467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:128296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.795 [2024-12-06 04:22:30.312477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.795 [2024-12-06 04:22:30.312488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:128304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.795 [2024-12-06 04:22:30.312496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.795 [2024-12-06 04:22:30.312507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:128336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.795 [2024-12-06 04:22:30.312516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.795 [2024-12-06 04:22:30.312527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:128344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.795 [2024-12-06 04:22:30.312536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.795 [2024-12-06 04:22:30.312564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:128352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.795 [2024-12-06 04:22:30.312573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.795 [2024-12-06 04:22:30.312584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:128392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.795 [2024-12-06 04:22:30.312593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.795 [2024-12-06 04:22:30.312605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:128408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.795 [2024-12-06 04:22:30.312614] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.795 [2024-12-06 04:22:30.312624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:128416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.795 [2024-12-06 04:22:30.312633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.795 [2024-12-06 04:22:30.312644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:128944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.795 [2024-12-06 04:22:30.312653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.795 [2024-12-06 04:22:30.312664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:128952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.795 [2024-12-06 04:22:30.312673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.795 [2024-12-06 04:22:30.312684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:128960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.795 [2024-12-06 04:22:30.312693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.795 [2024-12-06 04:22:30.312703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.795 [2024-12-06 04:22:30.312712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.795 [2024-12-06 04:22:30.312723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:128976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.795 [2024-12-06 04:22:30.312732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.795 [2024-12-06 04:22:30.312742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:128984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.795 [2024-12-06 04:22:30.312751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.795 [2024-12-06 04:22:30.312762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:128992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.795 [2024-12-06 04:22:30.312771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.795 [2024-12-06 04:22:30.312781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:129000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.795 [2024-12-06 04:22:30.312790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.795 [2024-12-06 04:22:30.312801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:129008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.795 [2024-12-06 04:22:30.312811] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.795 [2024-12-06 04:22:30.312821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:129016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.795 [2024-12-06 04:22:30.312831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.795 [2024-12-06 04:22:30.312843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:129024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.795 [2024-12-06 04:22:30.312853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.795 [2024-12-06 04:22:30.312864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:129032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.795 [2024-12-06 04:22:30.312873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.795 [2024-12-06 04:22:30.312884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:129040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.795 [2024-12-06 04:22:30.312893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.795 [2024-12-06 04:22:30.312903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:129048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.795 [2024-12-06 04:22:30.312927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.795 [2024-12-06 04:22:30.312938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:129056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.795 [2024-12-06 04:22:30.312947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.795 [2024-12-06 04:22:30.312957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:129064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.795 [2024-12-06 04:22:30.312966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.795 [2024-12-06 04:22:30.312976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:129072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.795 [2024-12-06 04:22:30.312985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.795 [2024-12-06 04:22:30.312996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:129080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.795 [2024-12-06 04:22:30.313004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.795 [2024-12-06 04:22:30.313015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:129088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.795 [2024-12-06 04:22:30.313023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.795 [2024-12-06 04:22:30.313034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:128432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.795 [2024-12-06 04:22:30.313042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.795 [2024-12-06 04:22:30.313053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:128440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.795 [2024-12-06 04:22:30.313061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.795 [2024-12-06 04:22:30.313072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:128472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.795 [2024-12-06 04:22:30.313080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.795 [2024-12-06 04:22:30.313091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:128480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.795 [2024-12-06 04:22:30.313099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.795 [2024-12-06 04:22:30.313111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:128488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.795 [2024-12-06 04:22:30.313120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.795 [2024-12-06 04:22:30.313130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:128496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.795 [2024-12-06 04:22:30.313139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.795 [2024-12-06 04:22:30.313150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:128504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.796 [2024-12-06 04:22:30.313159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.796 [2024-12-06 04:22:30.313170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:128512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.796 [2024-12-06 04:22:30.313179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.796 [2024-12-06 04:22:30.313189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:129096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.796 [2024-12-06 04:22:30.313198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.796 [2024-12-06 04:22:30.313209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:129104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.796 [2024-12-06 04:22:30.313217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:19:17.796 [2024-12-06 04:22:30.313227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:129112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.796 [2024-12-06 04:22:30.313236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.796 [2024-12-06 04:22:30.313247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:129120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.796 [2024-12-06 04:22:30.313255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.796 [2024-12-06 04:22:30.313266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:129128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.796 [2024-12-06 04:22:30.313274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.796 [2024-12-06 04:22:30.313285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:129136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.796 [2024-12-06 04:22:30.313293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.796 [2024-12-06 04:22:30.313304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:129144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.796 [2024-12-06 04:22:30.313313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.796 [2024-12-06 04:22:30.313324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:129152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.796 [2024-12-06 04:22:30.313333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.796 [2024-12-06 04:22:30.313344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:129160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.796 [2024-12-06 04:22:30.313353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.796 [2024-12-06 04:22:30.313364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:129168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.796 [2024-12-06 04:22:30.313372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.796 [2024-12-06 04:22:30.313382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:129176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.796 [2024-12-06 04:22:30.313391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.796 [2024-12-06 04:22:30.313402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:129184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.796 [2024-12-06 04:22:30.313410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.796 
[2024-12-06 04:22:30.313453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:129192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.796 [2024-12-06 04:22:30.313464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.796 [2024-12-06 04:22:30.313474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:129200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.796 [2024-12-06 04:22:30.313489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.796 [2024-12-06 04:22:30.313500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:129208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.796 [2024-12-06 04:22:30.313509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.796 [2024-12-06 04:22:30.313520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:129216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.796 [2024-12-06 04:22:30.313529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.796 [2024-12-06 04:22:30.313540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:129224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.796 [2024-12-06 04:22:30.313564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.796 [2024-12-06 04:22:30.313575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:129232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.796 [2024-12-06 04:22:30.313584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.796 [2024-12-06 04:22:30.313595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:128536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.796 [2024-12-06 04:22:30.313604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.796 [2024-12-06 04:22:30.313614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:128560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.796 [2024-12-06 04:22:30.313623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.796 [2024-12-06 04:22:30.313634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:128568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.796 [2024-12-06 04:22:30.313643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.796 [2024-12-06 04:22:30.313655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:128576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.796 [2024-12-06 04:22:30.313664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.796 [2024-12-06 04:22:30.313674] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:128584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.796 [2024-12-06 04:22:30.313683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.796 [2024-12-06 04:22:30.313694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:128592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.796 [2024-12-06 04:22:30.313703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.796 [2024-12-06 04:22:30.313713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:128600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.796 [2024-12-06 04:22:30.313722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.796 [2024-12-06 04:22:30.313733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:128656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.796 [2024-12-06 04:22:30.313742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.796 [2024-12-06 04:22:30.313752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:129240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.796 [2024-12-06 04:22:30.313761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.796 [2024-12-06 04:22:30.313789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:129248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.796 [2024-12-06 04:22:30.313799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.796 [2024-12-06 04:22:30.313810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:129256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.796 [2024-12-06 04:22:30.313819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.796 [2024-12-06 04:22:30.313830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:129264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.796 [2024-12-06 04:22:30.313845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.796 [2024-12-06 04:22:30.313856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:129272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.796 [2024-12-06 04:22:30.313866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.796 [2024-12-06 04:22:30.313877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:129280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.796 [2024-12-06 04:22:30.313887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.796 [2024-12-06 04:22:30.313898] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:129288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.796 [2024-12-06 04:22:30.313907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.796 [2024-12-06 04:22:30.313932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:129296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.796 [2024-12-06 04:22:30.313942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.796 [2024-12-06 04:22:30.313953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:129304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.796 [2024-12-06 04:22:30.313962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.796 [2024-12-06 04:22:30.313973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:129312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.796 [2024-12-06 04:22:30.313982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.796 [2024-12-06 04:22:30.313993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:129320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.796 [2024-12-06 04:22:30.314002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.796 [2024-12-06 04:22:30.314014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:129328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.796 [2024-12-06 04:22:30.314029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.797 [2024-12-06 04:22:30.314046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:129336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.797 [2024-12-06 04:22:30.314063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.797 [2024-12-06 04:22:30.314079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:129344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.797 [2024-12-06 04:22:30.314089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.797 [2024-12-06 04:22:30.314099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:129352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.797 [2024-12-06 04:22:30.314108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.797 [2024-12-06 04:22:30.314119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:129360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.797 [2024-12-06 04:22:30.314128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.797 [2024-12-06 04:22:30.314139] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:129368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.797 [2024-12-06 04:22:30.314147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.797 [2024-12-06 04:22:30.314158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:129376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.797 [2024-12-06 04:22:30.314167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.797 [2024-12-06 04:22:30.314177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:128696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.797 [2024-12-06 04:22:30.314186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.797 [2024-12-06 04:22:30.314203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:128720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.797 [2024-12-06 04:22:30.314216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.797 [2024-12-06 04:22:30.314228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:128744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.797 [2024-12-06 04:22:30.314237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.797 [2024-12-06 04:22:30.314248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:128752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.797 [2024-12-06 04:22:30.314257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.797 [2024-12-06 04:22:30.314268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:128768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.797 [2024-12-06 04:22:30.314276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.797 [2024-12-06 04:22:30.314287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:128776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.797 [2024-12-06 04:22:30.314296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.797 [2024-12-06 04:22:30.314307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:128800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.797 [2024-12-06 04:22:30.314316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.797 [2024-12-06 04:22:30.314327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:128808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.797 [2024-12-06 04:22:30.314336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.797 [2024-12-06 04:22:30.314347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 
nsid:1 lba:129384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.797 [2024-12-06 04:22:30.314355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.797 [2024-12-06 04:22:30.314366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:129392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.797 [2024-12-06 04:22:30.314375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.797 [2024-12-06 04:22:30.314385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:129400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.797 [2024-12-06 04:22:30.314410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.797 [2024-12-06 04:22:30.314421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:129408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.797 [2024-12-06 04:22:30.314430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.797 [2024-12-06 04:22:30.314441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:129416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.797 [2024-12-06 04:22:30.314463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.797 [2024-12-06 04:22:30.314475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:129424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.797 [2024-12-06 04:22:30.314485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.797 [2024-12-06 04:22:30.314495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:129432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.797 [2024-12-06 04:22:30.314506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.797 [2024-12-06 04:22:30.314517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:129440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.797 [2024-12-06 04:22:30.314526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.797 [2024-12-06 04:22:30.314537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:129448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.797 [2024-12-06 04:22:30.314562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.797 [2024-12-06 04:22:30.314592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:129456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.797 [2024-12-06 04:22:30.314609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.797 [2024-12-06 04:22:30.314621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:129464 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:19:17.797 [2024-12-06 04:22:30.314631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.797 [2024-12-06 04:22:30.314642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:129472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.797 [2024-12-06 04:22:30.314652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.797 [2024-12-06 04:22:30.314663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:129480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.797 [2024-12-06 04:22:30.314673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.797 [2024-12-06 04:22:30.314684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:129488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.797 [2024-12-06 04:22:30.314693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.797 [2024-12-06 04:22:30.314704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:129496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.797 [2024-12-06 04:22:30.314714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.797 [2024-12-06 04:22:30.314726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:129504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.797 [2024-12-06 04:22:30.314735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.797 [2024-12-06 04:22:30.314746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:129512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.797 [2024-12-06 04:22:30.314756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.797 [2024-12-06 04:22:30.314767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:129520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.797 [2024-12-06 04:22:30.314776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.797 [2024-12-06 04:22:30.314788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:129528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:17.797 [2024-12-06 04:22:30.314798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.797 [2024-12-06 04:22:30.314811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:128816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.797 [2024-12-06 04:22:30.314820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.797 [2024-12-06 04:22:30.314831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:128832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.797 
[2024-12-06 04:22:30.314841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.797 [2024-12-06 04:22:30.314853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:128840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.797 [2024-12-06 04:22:30.314862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.797 [2024-12-06 04:22:30.314873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:128848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.797 [2024-12-06 04:22:30.314882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.797 [2024-12-06 04:22:30.314894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:128856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.797 [2024-12-06 04:22:30.314903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.797 [2024-12-06 04:22:30.314914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:128896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.797 [2024-12-06 04:22:30.314939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.797 [2024-12-06 04:22:30.314955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:128904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:17.797 [2024-12-06 04:22:30.314969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.798 [2024-12-06 04:22:30.314979] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x84ecf0 is same with the state(5) to be set 00:19:17.798 [2024-12-06 04:22:30.314992] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:17.798 [2024-12-06 04:22:30.315000] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:17.798 [2024-12-06 04:22:30.315008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:128912 len:8 PRP1 0x0 PRP2 0x0 00:19:17.798 [2024-12-06 04:22:30.315017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.798 [2024-12-06 04:22:30.315084] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x84ecf0 was disconnected and freed. reset controller. 
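Every WRITE/READ entry above is an in-flight I/O that bdev_nvme completes manually with ABORTED - SQ DELETION (00/08) when the TCP qpair is torn down; once the queue is drained the qpair (0x84ecf0 here) is freed and a controller reset is scheduled. To gauge how many commands were outstanding at the moment of the drop, the aborted completions in a saved copy of this console output can simply be counted. This one-liner is only a hypothetical post-processing step with an assumed file name, not part of host/timeout.sh:

  # hypothetical: count aborted completions in a saved copy of this console log
  grep -o 'ABORTED - SQ DELETION' console.log | wc -l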
00:19:17.798 [2024-12-06 04:22:30.315177] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:17.798 [2024-12-06 04:22:30.315194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.798 [2024-12-06 04:22:30.315205] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:17.798 [2024-12-06 04:22:30.315214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.798 [2024-12-06 04:22:30.315224] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:17.798 [2024-12-06 04:22:30.315233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.798 [2024-12-06 04:22:30.315243] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:17.798 [2024-12-06 04:22:30.315252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:17.798 [2024-12-06 04:22:30.315261] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fec20 is same with the state(5) to be set 00:19:17.798 [2024-12-06 04:22:30.315478] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:17.798 [2024-12-06 04:22:30.315502] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7fec20 (9): Bad file descriptor 00:19:17.798 [2024-12-06 04:22:30.315621] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:17.798 [2024-12-06 04:22:30.315685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:17.798 [2024-12-06 04:22:30.315744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:17.798 [2024-12-06 04:22:30.315761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec20 with addr=10.0.0.2, port=4420 00:19:17.798 [2024-12-06 04:22:30.315772] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fec20 is same with the state(5) to be set 00:19:17.798 [2024-12-06 04:22:30.315792] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7fec20 (9): Bad file descriptor 00:19:17.798 [2024-12-06 04:22:30.315808] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:17.798 [2024-12-06 04:22:30.315818] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:17.798 [2024-12-06 04:22:30.315828] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:17.798 [2024-12-06 04:22:30.315848] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
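Both socket back ends (uring and posix) report errno 111, ECONNREFUSED, when the reset path tries to reconnect tqpair 0x7fec20 to 10.0.0.2:4420, so controller reinitialization fails and another reset is scheduled on the next poll. While this retry loop runs, the script trace that follows checks over the bdevperf RPC socket that the controller and its bdev are still registered. A condensed sketch of those checks, using the same rpc.py path and socket that appear in the trace below:

  # same RPCs as the host/timeout.sh trace that follows; socket and paths taken from this run
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expected: NVMe0
  $rpc -s /var/tmp/bdevperf.sock bdev_get_bdevs | jq -r '.[].name'              # expected: NVMe0n1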
00:19:17.798 [2024-12-06 04:22:30.315859] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:17.798 04:22:30 -- host/timeout.sh@56 -- # sleep 2 00:19:20.332 [2024-12-06 04:22:32.316003] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:20.332 [2024-12-06 04:22:32.316105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:20.332 [2024-12-06 04:22:32.316148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:20.332 [2024-12-06 04:22:32.316164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec20 with addr=10.0.0.2, port=4420 00:19:20.332 [2024-12-06 04:22:32.316177] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fec20 is same with the state(5) to be set 00:19:20.332 [2024-12-06 04:22:32.316201] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7fec20 (9): Bad file descriptor 00:19:20.332 [2024-12-06 04:22:32.316219] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:20.332 [2024-12-06 04:22:32.316245] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:20.332 [2024-12-06 04:22:32.316271] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:20.332 [2024-12-06 04:22:32.316313] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:20.332 [2024-12-06 04:22:32.316325] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:20.332 04:22:32 -- host/timeout.sh@57 -- # get_controller 00:19:20.332 04:22:32 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:20.332 04:22:32 -- host/timeout.sh@41 -- # jq -r '.[].name' 00:19:20.332 04:22:32 -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:19:20.332 04:22:32 -- host/timeout.sh@58 -- # get_bdev 00:19:20.332 04:22:32 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:19:20.332 04:22:32 -- host/timeout.sh@37 -- # jq -r '.[].name' 00:19:20.332 04:22:32 -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:19:20.332 04:22:32 -- host/timeout.sh@61 -- # sleep 5 00:19:22.237 [2024-12-06 04:22:34.316467] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:22.237 [2024-12-06 04:22:34.316584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:22.237 [2024-12-06 04:22:34.316629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:22.237 [2024-12-06 04:22:34.316646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec20 with addr=10.0.0.2, port=4420 00:19:22.237 [2024-12-06 04:22:34.316660] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fec20 is same with the state(5) to be set 00:19:22.237 [2024-12-06 04:22:34.316697] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7fec20 (9): Bad file descriptor 00:19:22.237 [2024-12-06 04:22:34.316727] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:22.237 [2024-12-06 04:22:34.316738] 
nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:22.237 [2024-12-06 04:22:34.316749] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:22.237 [2024-12-06 04:22:34.316775] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:22.237 [2024-12-06 04:22:34.316786] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:24.157 [2024-12-06 04:22:36.316837] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:24.157 [2024-12-06 04:22:36.316925] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:24.157 [2024-12-06 04:22:36.316937] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:24.157 [2024-12-06 04:22:36.316948] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:19:24.157 [2024-12-06 04:22:36.316975] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:25.089 00:19:25.089 Latency(us) 00:19:25.089 [2024-12-06T04:22:37.654Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:25.089 [2024-12-06T04:22:37.654Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:25.089 Verification LBA range: start 0x0 length 0x4000 00:19:25.089 NVMe0n1 : 8.14 1974.31 7.71 15.73 0.00 64236.12 3157.64 7015926.69 00:19:25.089 [2024-12-06T04:22:37.654Z] =================================================================================================================== 00:19:25.089 [2024-12-06T04:22:37.654Z] Total : 1974.31 7.71 15.73 0.00 64236.12 3157.64 7015926.69 00:19:25.089 0 00:19:25.348 04:22:37 -- host/timeout.sh@62 -- # get_controller 00:19:25.348 04:22:37 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:25.348 04:22:37 -- host/timeout.sh@41 -- # jq -r '.[].name' 00:19:25.608 04:22:38 -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:19:25.608 04:22:38 -- host/timeout.sh@63 -- # get_bdev 00:19:25.608 04:22:38 -- host/timeout.sh@37 -- # jq -r '.[].name' 00:19:25.608 04:22:38 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:19:25.867 04:22:38 -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:19:25.867 04:22:38 -- host/timeout.sh@65 -- # wait 86033 00:19:25.867 04:22:38 -- host/timeout.sh@67 -- # killprocess 86015 00:19:25.867 04:22:38 -- common/autotest_common.sh@936 -- # '[' -z 86015 ']' 00:19:25.867 04:22:38 -- common/autotest_common.sh@940 -- # kill -0 86015 00:19:25.867 04:22:38 -- common/autotest_common.sh@941 -- # uname 00:19:25.867 04:22:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:25.867 04:22:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86015 00:19:25.867 04:22:38 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:19:25.867 04:22:38 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:19:25.867 killing process with pid 86015 00:19:25.867 04:22:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86015' 00:19:25.867 Received shutdown signal, test time was about 9.230534 seconds 00:19:25.867 00:19:25.867 Latency(us) 00:19:25.867 
[2024-12-06T04:22:38.432Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:25.867 [2024-12-06T04:22:38.432Z] =================================================================================================================== 00:19:25.867 [2024-12-06T04:22:38.432Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:25.867 04:22:38 -- common/autotest_common.sh@955 -- # kill 86015 00:19:25.867 04:22:38 -- common/autotest_common.sh@960 -- # wait 86015 00:19:26.127 04:22:38 -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:26.386 [2024-12-06 04:22:38.877032] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:26.386 04:22:38 -- host/timeout.sh@74 -- # bdevperf_pid=86160 00:19:26.386 04:22:38 -- host/timeout.sh@76 -- # waitforlisten 86160 /var/tmp/bdevperf.sock 00:19:26.386 04:22:38 -- common/autotest_common.sh@829 -- # '[' -z 86160 ']' 00:19:26.386 04:22:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:26.386 04:22:38 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:26.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:26.386 04:22:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:26.386 04:22:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:26.386 04:22:38 -- common/autotest_common.sh@10 -- # set +x 00:19:26.386 04:22:38 -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:19:26.645 [2024-12-06 04:22:38.953891] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:19:26.645 [2024-12-06 04:22:38.953983] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86160 ] 00:19:26.645 [2024-12-06 04:22:39.089950] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:26.645 [2024-12-06 04:22:39.183067] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:27.581 04:22:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:27.581 04:22:39 -- common/autotest_common.sh@862 -- # return 0 00:19:27.581 04:22:39 -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:19:27.840 04:22:40 -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:19:28.099 NVMe0n1 00:19:28.099 04:22:40 -- host/timeout.sh@84 -- # rpc_pid=86179 00:19:28.099 04:22:40 -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:28.099 04:22:40 -- host/timeout.sh@86 -- # sleep 1 00:19:28.099 Running I/O for 10 seconds... 
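For the first run summarized above, 1974.31 IOPS at the 4096-byte I/O size works out to the reported 7.71 MiB/s (1974.31 * 4096 / 2^20). The second run then re-adds the TCP listener, starts a fresh bdevperf instance, enables transport-level retries with bdev_nvme_set_options -r -1, and attaches the controller with --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 before perform_tests starts I/O; immediately afterwards (the first entry of the next block) the listener is removed again to simulate a target outage. A condensed sketch of that RPC sequence, assembled from the trace above and simplified (the real script uses its waitforlisten/killprocess helpers and checks return codes):

  # condensed from the trace above; flags and paths are taken verbatim from this run
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w verify -t 10 -f &
  while [ ! -S /var/tmp/bdevperf.sock ]; do sleep 0.1; done   # stand-in for the waitforlisten helper
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
  sleep 1                                                      # let I/O start, as in the trace
  $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # simulate outage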
00:19:29.036 04:22:41 -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:29.297 [2024-12-06 04:22:41.747763] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6f520 is same with the state(5) to be set 00:19:29.297 [2024-12-06 04:22:41.747830] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6f520 is same with the state(5) to be set 00:19:29.297 [2024-12-06 04:22:41.747841] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6f520 is same with the state(5) to be set 00:19:29.297 [2024-12-06 04:22:41.747849] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6f520 is same with the state(5) to be set 00:19:29.297 [2024-12-06 04:22:41.747858] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6f520 is same with the state(5) to be set 00:19:29.297 [2024-12-06 04:22:41.747866] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6f520 is same with the state(5) to be set 00:19:29.297 [2024-12-06 04:22:41.747875] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6f520 is same with the state(5) to be set 00:19:29.297 [2024-12-06 04:22:41.747882] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6f520 is same with the state(5) to be set 00:19:29.297 [2024-12-06 04:22:41.747890] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6f520 is same with the state(5) to be set 00:19:29.297 [2024-12-06 04:22:41.747898] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6f520 is same with the state(5) to be set 00:19:29.297 [2024-12-06 04:22:41.747908] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6f520 is same with the state(5) to be set 00:19:29.297 [2024-12-06 04:22:41.747916] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6f520 is same with the state(5) to be set 00:19:29.297 [2024-12-06 04:22:41.747932] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6f520 is same with the state(5) to be set 00:19:29.297 [2024-12-06 04:22:41.747940] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6f520 is same with the state(5) to be set 00:19:29.297 [2024-12-06 04:22:41.747956] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6f520 is same with the state(5) to be set 00:19:29.297 [2024-12-06 04:22:41.747964] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6f520 is same with the state(5) to be set 00:19:29.297 [2024-12-06 04:22:41.747972] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6f520 is same with the state(5) to be set 00:19:29.297 [2024-12-06 04:22:41.747985] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6f520 is same with the state(5) to be set 00:19:29.297 [2024-12-06 04:22:41.747993] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6f520 is same with the state(5) to be set 00:19:29.297 [2024-12-06 04:22:41.748000] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6f520 is same with the state(5) to be set 00:19:29.297 [2024-12-06 04:22:41.748008] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xd6f520 is same with the state(5) to be set 00:19:29.297 [2024-12-06 04:22:41.748016] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6f520 is same with the state(5) to be set 00:19:29.297 [2024-12-06 04:22:41.748024] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6f520 is same with the state(5) to be set 00:19:29.297 [2024-12-06 04:22:41.748032] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6f520 is same with the state(5) to be set 00:19:29.297 [2024-12-06 04:22:41.748040] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6f520 is same with the state(5) to be set 00:19:29.297 [2024-12-06 04:22:41.748048] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6f520 is same with the state(5) to be set 00:19:29.297 [2024-12-06 04:22:41.748056] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6f520 is same with the state(5) to be set 00:19:29.297 [2024-12-06 04:22:41.748064] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6f520 is same with the state(5) to be set 00:19:29.297 [2024-12-06 04:22:41.748079] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6f520 is same with the state(5) to be set 00:19:29.297 [2024-12-06 04:22:41.748087] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6f520 is same with the state(5) to be set 00:19:29.297 [2024-12-06 04:22:41.748096] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6f520 is same with the state(5) to be set 00:19:29.297 [2024-12-06 04:22:41.748103] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6f520 is same with the state(5) to be set 00:19:29.297 [2024-12-06 04:22:41.748111] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6f520 is same with the state(5) to be set 00:19:29.297 [2024-12-06 04:22:41.748119] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6f520 is same with the state(5) to be set 00:19:29.297 [2024-12-06 04:22:41.748128] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6f520 is same with the state(5) to be set 00:19:29.297 [2024-12-06 04:22:41.748136] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6f520 is same with the state(5) to be set 00:19:29.297 [2024-12-06 04:22:41.748197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:122088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.297 [2024-12-06 04:22:41.748227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.297 [2024-12-06 04:22:41.748248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:122096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.298 [2024-12-06 04:22:41.748258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.298 [2024-12-06 04:22:41.748269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:122120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.298 [2024-12-06 04:22:41.748278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:29.298 [2024-12-06 04:22:41.748289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:122136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.298 [2024-12-06 04:22:41.748298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.298 [2024-12-06 04:22:41.748308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:122152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.298 [2024-12-06 04:22:41.748317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.298 [2024-12-06 04:22:41.748327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:122184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.298 [2024-12-06 04:22:41.748336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.298 [2024-12-06 04:22:41.748347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:122696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.298 [2024-12-06 04:22:41.748355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.298 [2024-12-06 04:22:41.748365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:122704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.298 [2024-12-06 04:22:41.748383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.298 [2024-12-06 04:22:41.748394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:122720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.298 [2024-12-06 04:22:41.748417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.298 [2024-12-06 04:22:41.748445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:122728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.298 [2024-12-06 04:22:41.748471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.298 [2024-12-06 04:22:41.748482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:122736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.298 [2024-12-06 04:22:41.748501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.298 [2024-12-06 04:22:41.748512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:122752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.298 [2024-12-06 04:22:41.748521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.298 [2024-12-06 04:22:41.748550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:122760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.298 [2024-12-06 04:22:41.748559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.298 [2024-12-06 
04:22:41.748579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:122768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.298 [2024-12-06 04:22:41.748588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.298 [2024-12-06 04:22:41.748600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:122816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.298 [2024-12-06 04:22:41.748609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.298 [2024-12-06 04:22:41.748621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:122824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.298 [2024-12-06 04:22:41.748630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.298 [2024-12-06 04:22:41.748642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:122840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.298 [2024-12-06 04:22:41.748660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.298 [2024-12-06 04:22:41.748672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:122208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.298 [2024-12-06 04:22:41.748682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.298 [2024-12-06 04:22:41.748693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:122240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.298 [2024-12-06 04:22:41.748702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.298 [2024-12-06 04:22:41.748714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:122248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.298 [2024-12-06 04:22:41.748723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.298 [2024-12-06 04:22:41.748735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:122256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.298 [2024-12-06 04:22:41.748744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.298 [2024-12-06 04:22:41.748756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:122296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.298 [2024-12-06 04:22:41.748766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.298 [2024-12-06 04:22:41.748777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:122320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.298 [2024-12-06 04:22:41.748787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.298 [2024-12-06 04:22:41.748798] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:122328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.298 [2024-12-06 04:22:41.748807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.298 [2024-12-06 04:22:41.748819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:122336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.298 [2024-12-06 04:22:41.748829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.298 [2024-12-06 04:22:41.748840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:122848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.298 [2024-12-06 04:22:41.748850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.298 [2024-12-06 04:22:41.748861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:122856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.298 [2024-12-06 04:22:41.748871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.298 [2024-12-06 04:22:41.748883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:122864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.298 [2024-12-06 04:22:41.748892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.298 [2024-12-06 04:22:41.748903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:122872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.298 [2024-12-06 04:22:41.748913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.298 [2024-12-06 04:22:41.748924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:122880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.298 [2024-12-06 04:22:41.748934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.298 [2024-12-06 04:22:41.748945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:122888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.299 [2024-12-06 04:22:41.748955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.299 [2024-12-06 04:22:41.748966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:122896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.299 [2024-12-06 04:22:41.748975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.299 [2024-12-06 04:22:41.748987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:122904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.299 [2024-12-06 04:22:41.749008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.299 [2024-12-06 04:22:41.749019] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:63 nsid:1 lba:122912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.299 [2024-12-06 04:22:41.749030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.299 [2024-12-06 04:22:41.749042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:122920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.299 [2024-12-06 04:22:41.749051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.299 [2024-12-06 04:22:41.749063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:122928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.299 [2024-12-06 04:22:41.749072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.299 [2024-12-06 04:22:41.749083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:122936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.299 [2024-12-06 04:22:41.749093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.299 [2024-12-06 04:22:41.749104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:122944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.299 [2024-12-06 04:22:41.749119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.299 [2024-12-06 04:22:41.749145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:122952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.299 [2024-12-06 04:22:41.749155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.299 [2024-12-06 04:22:41.749165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:122960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.299 [2024-12-06 04:22:41.749175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.299 [2024-12-06 04:22:41.749186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:122968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.299 [2024-12-06 04:22:41.749195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.299 [2024-12-06 04:22:41.749206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:122976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.299 [2024-12-06 04:22:41.749216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.299 [2024-12-06 04:22:41.749227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:122984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.299 [2024-12-06 04:22:41.749236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.299 [2024-12-06 04:22:41.749247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 
lba:122344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.299 [2024-12-06 04:22:41.749256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.299 [2024-12-06 04:22:41.749267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:122392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.299 [2024-12-06 04:22:41.749276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.299 [2024-12-06 04:22:41.749288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:122408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.299 [2024-12-06 04:22:41.749297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.299 [2024-12-06 04:22:41.749309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:122472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.299 [2024-12-06 04:22:41.749318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.299 [2024-12-06 04:22:41.749329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:122504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.299 [2024-12-06 04:22:41.749339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.299 [2024-12-06 04:22:41.749350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:122512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.299 [2024-12-06 04:22:41.749360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.299 [2024-12-06 04:22:41.749372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:122520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.299 [2024-12-06 04:22:41.749381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.299 [2024-12-06 04:22:41.749392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:122528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.299 [2024-12-06 04:22:41.749401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.299 [2024-12-06 04:22:41.749412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:122992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.299 [2024-12-06 04:22:41.749422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.299 [2024-12-06 04:22:41.749433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:123000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.299 [2024-12-06 04:22:41.749443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.299 [2024-12-06 04:22:41.749465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:123008 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:19:29.299 [2024-12-06 04:22:41.749476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.299 [2024-12-06 04:22:41.749503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:123016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.299 [2024-12-06 04:22:41.749512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.299 [2024-12-06 04:22:41.749524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:123024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.299 [2024-12-06 04:22:41.749534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.299 [2024-12-06 04:22:41.749553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:123032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.299 [2024-12-06 04:22:41.749562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.299 [2024-12-06 04:22:41.749582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:123040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.299 [2024-12-06 04:22:41.749599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.299 [2024-12-06 04:22:41.749610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:123048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.299 [2024-12-06 04:22:41.749620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.299 [2024-12-06 04:22:41.749632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:123056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.299 [2024-12-06 04:22:41.749641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.299 [2024-12-06 04:22:41.749652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:123064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.300 [2024-12-06 04:22:41.749662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.300 [2024-12-06 04:22:41.749673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:123072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.300 [2024-12-06 04:22:41.749683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.300 [2024-12-06 04:22:41.749694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:123080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.300 [2024-12-06 04:22:41.749704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.300 [2024-12-06 04:22:41.749716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:123088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.300 
[2024-12-06 04:22:41.749726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.300 [2024-12-06 04:22:41.749737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:123096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.300 [2024-12-06 04:22:41.749748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.300 [2024-12-06 04:22:41.749759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:123104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.300 [2024-12-06 04:22:41.749769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.300 [2024-12-06 04:22:41.749780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:123112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.300 [2024-12-06 04:22:41.749790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.300 [2024-12-06 04:22:41.749802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:123120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.300 [2024-12-06 04:22:41.749811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.300 [2024-12-06 04:22:41.749822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:123128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.300 [2024-12-06 04:22:41.749832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.300 [2024-12-06 04:22:41.749843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:123136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.300 [2024-12-06 04:22:41.749852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.300 [2024-12-06 04:22:41.749863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:123144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.300 [2024-12-06 04:22:41.749873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.300 [2024-12-06 04:22:41.749885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:123152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.300 [2024-12-06 04:22:41.749895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.300 [2024-12-06 04:22:41.749906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:123160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.300 [2024-12-06 04:22:41.749915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.300 [2024-12-06 04:22:41.749927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:122552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.300 [2024-12-06 04:22:41.749936] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.300 [2024-12-06 04:22:41.749965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:122560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.300 [2024-12-06 04:22:41.749985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.300 [2024-12-06 04:22:41.750003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:122568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.300 [2024-12-06 04:22:41.750016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.300 [2024-12-06 04:22:41.750031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:122576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.300 [2024-12-06 04:22:41.750043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.300 [2024-12-06 04:22:41.750058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:122592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.300 [2024-12-06 04:22:41.750071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.300 [2024-12-06 04:22:41.750085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:122608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.300 [2024-12-06 04:22:41.750098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.300 [2024-12-06 04:22:41.750112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:122656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.300 [2024-12-06 04:22:41.750125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.300 [2024-12-06 04:22:41.750139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:122664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.300 [2024-12-06 04:22:41.750152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.300 [2024-12-06 04:22:41.750167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:123168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.300 [2024-12-06 04:22:41.750179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.300 [2024-12-06 04:22:41.750194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:123176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.300 [2024-12-06 04:22:41.750206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.300 [2024-12-06 04:22:41.750221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:123184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.300 [2024-12-06 04:22:41.750234] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.300 [2024-12-06 04:22:41.750249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:123192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.300 [2024-12-06 04:22:41.750261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.300 [2024-12-06 04:22:41.750281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:123200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.300 [2024-12-06 04:22:41.750293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.300 [2024-12-06 04:22:41.750308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:123208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.300 [2024-12-06 04:22:41.750320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.300 [2024-12-06 04:22:41.750335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:123216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.300 [2024-12-06 04:22:41.750348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.300 [2024-12-06 04:22:41.750363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:123224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.300 [2024-12-06 04:22:41.750375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.300 [2024-12-06 04:22:41.750390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:123232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.300 [2024-12-06 04:22:41.750403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.301 [2024-12-06 04:22:41.750430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:123240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.301 [2024-12-06 04:22:41.750445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.301 [2024-12-06 04:22:41.750460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:123248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.301 [2024-12-06 04:22:41.750473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.301 [2024-12-06 04:22:41.750519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:123256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.301 [2024-12-06 04:22:41.750533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.301 [2024-12-06 04:22:41.750548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:123264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.301 [2024-12-06 04:22:41.750561] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.301 [2024-12-06 04:22:41.750592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:123272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.301 [2024-12-06 04:22:41.750610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.301 [2024-12-06 04:22:41.750627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:123280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.301 [2024-12-06 04:22:41.750640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.301 [2024-12-06 04:22:41.750656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:123288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.301 [2024-12-06 04:22:41.750666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.301 [2024-12-06 04:22:41.750677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:123296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.301 [2024-12-06 04:22:41.750687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.301 [2024-12-06 04:22:41.750698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:122672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.301 [2024-12-06 04:22:41.750708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.301 [2024-12-06 04:22:41.750719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:122680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.301 [2024-12-06 04:22:41.750729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.301 [2024-12-06 04:22:41.750740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:122688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.301 [2024-12-06 04:22:41.750751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.301 [2024-12-06 04:22:41.750767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:122712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.301 [2024-12-06 04:22:41.750787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.301 [2024-12-06 04:22:41.750802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:122744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.301 [2024-12-06 04:22:41.750816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.301 [2024-12-06 04:22:41.750831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:122776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.301 [2024-12-06 04:22:41.750844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.301 [2024-12-06 04:22:41.750858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:122784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.301 [2024-12-06 04:22:41.750871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.301 [2024-12-06 04:22:41.750883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:122792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.301 [2024-12-06 04:22:41.750893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.301 [2024-12-06 04:22:41.750905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:123304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.301 [2024-12-06 04:22:41.750915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.301 [2024-12-06 04:22:41.750927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:123312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.301 [2024-12-06 04:22:41.750937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.301 [2024-12-06 04:22:41.750948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:123320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.301 [2024-12-06 04:22:41.750959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.301 [2024-12-06 04:22:41.750970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:123328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.301 [2024-12-06 04:22:41.750980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.301 [2024-12-06 04:22:41.750991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:123336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.301 [2024-12-06 04:22:41.751001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.301 [2024-12-06 04:22:41.751012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:123344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.301 [2024-12-06 04:22:41.751022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.301 [2024-12-06 04:22:41.751038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:123352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.301 [2024-12-06 04:22:41.751049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.301 [2024-12-06 04:22:41.751060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:123360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.301 [2024-12-06 04:22:41.751071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:19:29.301 [2024-12-06 04:22:41.751082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:123368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.301 [2024-12-06 04:22:41.751092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.301 [2024-12-06 04:22:41.751103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:123376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.301 [2024-12-06 04:22:41.751112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.301 [2024-12-06 04:22:41.751124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:123384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.301 [2024-12-06 04:22:41.751133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.301 [2024-12-06 04:22:41.751180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:123392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.301 [2024-12-06 04:22:41.751192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.301 [2024-12-06 04:22:41.751203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:123400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.301 [2024-12-06 04:22:41.751212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.301 [2024-12-06 04:22:41.751222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:123408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.302 [2024-12-06 04:22:41.751231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.302 [2024-12-06 04:22:41.751242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:123416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.302 [2024-12-06 04:22:41.751250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.302 [2024-12-06 04:22:41.751267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:123424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.302 [2024-12-06 04:22:41.751282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.302 [2024-12-06 04:22:41.751293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:123432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.302 [2024-12-06 04:22:41.751302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.302 [2024-12-06 04:22:41.751313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:123440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:29.302 [2024-12-06 04:22:41.751331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.302 [2024-12-06 
04:22:41.751342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:123448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.302 [2024-12-06 04:22:41.751351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.302 [2024-12-06 04:22:41.751361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:122800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.302 [2024-12-06 04:22:41.751370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.302 [2024-12-06 04:22:41.751381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:122808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.302 [2024-12-06 04:22:41.751390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.302 [2024-12-06 04:22:41.751400] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x644cf0 is same with the state(5) to be set 00:19:29.302 [2024-12-06 04:22:41.751412] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:29.302 [2024-12-06 04:22:41.751438] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:29.302 [2024-12-06 04:22:41.751454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:122832 len:8 PRP1 0x0 PRP2 0x0 00:19:29.302 [2024-12-06 04:22:41.751463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.302 [2024-12-06 04:22:41.751516] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x644cf0 was disconnected and freed. reset controller. 
00:19:29.302 [2024-12-06 04:22:41.751619] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:29.302 [2024-12-06 04:22:41.751636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.302 [2024-12-06 04:22:41.751646] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:29.302 [2024-12-06 04:22:41.751655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.302 [2024-12-06 04:22:41.751664] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:29.302 [2024-12-06 04:22:41.751673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.302 [2024-12-06 04:22:41.751682] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:29.302 [2024-12-06 04:22:41.751690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.302 [2024-12-06 04:22:41.751699] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4c20 is same with the state(5) to be set 00:19:29.302 [2024-12-06 04:22:41.751944] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:29.302 [2024-12-06 04:22:41.751984] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4c20 (9): Bad file descriptor 00:19:29.302 [2024-12-06 04:22:41.752101] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:29.302 [2024-12-06 04:22:41.752201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:29.302 [2024-12-06 04:22:41.752261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:29.302 [2024-12-06 04:22:41.752290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4c20 with addr=10.0.0.2, port=4420 00:19:29.302 [2024-12-06 04:22:41.752302] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4c20 is same with the state(5) to be set 00:19:29.302 [2024-12-06 04:22:41.752322] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4c20 (9): Bad file descriptor 00:19:29.302 [2024-12-06 04:22:41.752339] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:29.302 [2024-12-06 04:22:41.752349] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:29.302 04:22:41 -- host/timeout.sh@90 -- # sleep 1 00:19:29.302 [2024-12-06 04:22:41.765307] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:29.302 [2024-12-06 04:22:41.765351] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:29.302 [2024-12-06 04:22:41.765366] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:19:30.238 [2024-12-06 04:22:42.765536] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:19:30.238 [2024-12-06 04:22:42.765654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:30.238 [2024-12-06 04:22:42.765700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:30.238 [2024-12-06 04:22:42.765733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4c20 with addr=10.0.0.2, port=4420
00:19:30.238 [2024-12-06 04:22:42.765746] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4c20 is same with the state(5) to be set
00:19:30.238 [2024-12-06 04:22:42.765772] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4c20 (9): Bad file descriptor
00:19:30.238 [2024-12-06 04:22:42.765791] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:19:30.238 [2024-12-06 04:22:42.765801] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:19:30.238 [2024-12-06 04:22:42.765811] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:19:30.238 [2024-12-06 04:22:42.765851] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:30.238 [2024-12-06 04:22:42.765865] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:19:30.238 04:22:42 -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:19:30.497 [2024-12-06 04:22:43.037905] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:19:30.497 04:22:43 -- host/timeout.sh@92 -- # wait 86179
00:19:31.431 [2024-12-06 04:22:43.785689] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:19:39.546
00:19:39.546                                                                                                 Latency(us)
00:19:39.546 [2024-12-06T04:22:52.111Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:19:39.546 [2024-12-06T04:22:52.111Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:19:39.546 Verification LBA range: start 0x0 length 0x4000
00:19:39.546 NVMe0n1                     : 10.01      9905.24      38.69       0.00       0.00   12899.22     945.80 3019898.88
00:19:39.546 [2024-12-06T04:22:52.111Z] ===================================================================================================================
00:19:39.546 [2024-12-06T04:22:52.111Z] Total                       :            9905.24      38.69       0.00       0.00   12899.22     945.80 3019898.88
00:19:39.546 0
00:19:39.546 04:22:50 -- host/timeout.sh@97 -- # rpc_pid=86288
00:19:39.546 04:22:50 -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:19:39.546 04:22:50 -- host/timeout.sh@98 -- # sleep 1
00:19:39.546 Running I/O for 10 seconds...
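For orientation, the cycle traced above is the host timeout test dropping and restoring the NVMe/TCP listener while bdevperf drives I/O: queued commands complete with ABORTED - SQ DELETION, reconnect attempts fail with connect() errno = 111 (connection refused) until the listener returns, and the controller reset then succeeds. A minimal sketch of that listener toggle, using only the rpc.py calls visible in this trace (the RPC socket, exact timing, and the surrounding bdevperf setup are assumptions, not shown here):

#!/usr/bin/env bash
# Sketch of the listener drop/restore cycle exercised in the trace above
# (NQN, address, port, and script path taken from the log; the rest is illustrative).
NQN=nqn.2016-06.io.spdk:cnode1
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Remove the TCP listener: host-side queued I/O is aborted (SQ DELETION) and
# reconnect attempts fail with connect() errno = 111.
"$RPC" nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420

sleep 1  # give the host reset/reconnect loop time to observe the failure

# Re-add the listener: the next reconnect succeeds and bdev_nvme logs
# "Resetting controller successful."
"$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420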
00:19:39.546 04:22:51 -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:39.546 [2024-12-06 04:22:51.925178] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd75f10 is same with the state(5) to be set 00:19:39.546 [2024-12-06 04:22:51.925225] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd75f10 is same with the state(5) to be set 00:19:39.546 [2024-12-06 04:22:51.925253] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd75f10 is same with the state(5) to be set 00:19:39.546 [2024-12-06 04:22:51.925262] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd75f10 is same with the state(5) to be set 00:19:39.546 [2024-12-06 04:22:51.925270] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd75f10 is same with the state(5) to be set 00:19:39.546 [2024-12-06 04:22:51.925278] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd75f10 is same with the state(5) to be set 00:19:39.546 [2024-12-06 04:22:51.925286] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd75f10 is same with the state(5) to be set 00:19:39.546 [2024-12-06 04:22:51.925294] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd75f10 is same with the state(5) to be set 00:19:39.546 [2024-12-06 04:22:51.925302] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd75f10 is same with the state(5) to be set 00:19:39.546 [2024-12-06 04:22:51.925310] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd75f10 is same with the state(5) to be set 00:19:39.546 [2024-12-06 04:22:51.925317] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd75f10 is same with the state(5) to be set 00:19:39.546 [2024-12-06 04:22:51.925325] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd75f10 is same with the state(5) to be set 00:19:39.546 [2024-12-06 04:22:51.925333] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd75f10 is same with the state(5) to be set 00:19:39.546 [2024-12-06 04:22:51.925341] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd75f10 is same with the state(5) to be set 00:19:39.546 [2024-12-06 04:22:51.925349] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd75f10 is same with the state(5) to be set 00:19:39.546 [2024-12-06 04:22:51.925357] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd75f10 is same with the state(5) to be set 00:19:39.546 [2024-12-06 04:22:51.925364] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd75f10 is same with the state(5) to be set 00:19:39.546 [2024-12-06 04:22:51.925372] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd75f10 is same with the state(5) to be set 00:19:39.546 [2024-12-06 04:22:51.925380] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd75f10 is same with the state(5) to be set 00:19:39.546 [2024-12-06 04:22:51.925476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:7424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.546 [2024-12-06 04:22:51.925506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.546 [2024-12-06 04:22:51.925560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.546 [2024-12-06 04:22:51.925572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.546 [2024-12-06 04:22:51.925584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:7440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.546 [2024-12-06 04:22:51.925594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.546 [2024-12-06 04:22:51.925605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:6784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.546 [2024-12-06 04:22:51.925615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.546 [2024-12-06 04:22:51.925630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:6800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.546 [2024-12-06 04:22:51.925648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.546 [2024-12-06 04:22:51.925659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:6808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.546 [2024-12-06 04:22:51.925669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.546 [2024-12-06 04:22:51.925680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:6832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.546 [2024-12-06 04:22:51.925689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.546 [2024-12-06 04:22:51.925700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.546 [2024-12-06 04:22:51.925709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.547 [2024-12-06 04:22:51.925720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.547 [2024-12-06 04:22:51.925729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.547 [2024-12-06 04:22:51.925740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:6872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.547 [2024-12-06 04:22:51.925750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.547 [2024-12-06 04:22:51.925761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:6880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.547 [2024-12-06 04:22:51.925770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:19:39.547 [2024-12-06 04:22:51.925781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:7464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.547 [2024-12-06 04:22:51.925790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.547 [2024-12-06 04:22:51.925801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:7472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.547 [2024-12-06 04:22:51.925811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.547 [2024-12-06 04:22:51.925822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:7480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.547 [2024-12-06 04:22:51.925831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.547 [2024-12-06 04:22:51.925842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:7504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.547 [2024-12-06 04:22:51.925851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.547 [2024-12-06 04:22:51.925862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:7512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.547 [2024-12-06 04:22:51.925871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.547 [2024-12-06 04:22:51.925882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:7536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.547 [2024-12-06 04:22:51.925894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.547 [2024-12-06 04:22:51.925906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:7552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.547 [2024-12-06 04:22:51.925915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.547 [2024-12-06 04:22:51.925927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:7560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.547 [2024-12-06 04:22:51.925936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.547 [2024-12-06 04:22:51.925947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:7568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.547 [2024-12-06 04:22:51.925956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.547 [2024-12-06 04:22:51.925968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:7576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.547 [2024-12-06 04:22:51.925977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.547 [2024-12-06 
04:22:51.925988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.547 [2024-12-06 04:22:51.925997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.547 [2024-12-06 04:22:51.926009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:6888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.547 [2024-12-06 04:22:51.926019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.547 [2024-12-06 04:22:51.926030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:6896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.547 [2024-12-06 04:22:51.926039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.547 [2024-12-06 04:22:51.926051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:6912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.547 [2024-12-06 04:22:51.926060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.547 [2024-12-06 04:22:51.926071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:6920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.547 [2024-12-06 04:22:51.926081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.547 [2024-12-06 04:22:51.926092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:6952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.547 [2024-12-06 04:22:51.926101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.547 [2024-12-06 04:22:51.926112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.547 [2024-12-06 04:22:51.926121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.547 [2024-12-06 04:22:51.926132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:7024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.547 [2024-12-06 04:22:51.926141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.547 [2024-12-06 04:22:51.926152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:7032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.547 [2024-12-06 04:22:51.926162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.547 [2024-12-06 04:22:51.926173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:7592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.547 [2024-12-06 04:22:51.926182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.547 [2024-12-06 04:22:51.926193] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:7600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.547 [2024-12-06 04:22:51.926202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.547 [2024-12-06 04:22:51.926213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:7608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.547 [2024-12-06 04:22:51.926223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.547 [2024-12-06 04:22:51.926235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:7616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.547 [2024-12-06 04:22:51.926244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.547 [2024-12-06 04:22:51.926255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:7624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.547 [2024-12-06 04:22:51.926264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.547 [2024-12-06 04:22:51.926275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:7632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.547 [2024-12-06 04:22:51.926285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.547 [2024-12-06 04:22:51.926296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:7640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.547 [2024-12-06 04:22:51.926305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.547 [2024-12-06 04:22:51.926316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:7648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.547 [2024-12-06 04:22:51.926325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.547 [2024-12-06 04:22:51.926336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.547 [2024-12-06 04:22:51.926346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.547 [2024-12-06 04:22:51.926357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:7664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.547 [2024-12-06 04:22:51.926366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.547 [2024-12-06 04:22:51.926378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.547 [2024-12-06 04:22:51.926387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.547 [2024-12-06 04:22:51.926399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:24 nsid:1 lba:7680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.547 [2024-12-06 04:22:51.926408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.547 [2024-12-06 04:22:51.926432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:7688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.547 [2024-12-06 04:22:51.926444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.547 [2024-12-06 04:22:51.926455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:7696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.547 [2024-12-06 04:22:51.926465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.547 [2024-12-06 04:22:51.926476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:7704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.547 [2024-12-06 04:22:51.926485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.547 [2024-12-06 04:22:51.926496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:7712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.547 [2024-12-06 04:22:51.926505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.547 [2024-12-06 04:22:51.926516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:7720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.547 [2024-12-06 04:22:51.926525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.547 [2024-12-06 04:22:51.926536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:7728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.548 [2024-12-06 04:22:51.926545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.548 [2024-12-06 04:22:51.926557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:7736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.548 [2024-12-06 04:22:51.926566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.548 [2024-12-06 04:22:51.926586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:7048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.548 [2024-12-06 04:22:51.926597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.548 [2024-12-06 04:22:51.926609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.548 [2024-12-06 04:22:51.926618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.548 [2024-12-06 04:22:51.926629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:7072 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:19:39.548 [2024-12-06 04:22:51.926639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.548 [2024-12-06 04:22:51.926650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:7088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.548 [2024-12-06 04:22:51.926660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.548 [2024-12-06 04:22:51.926671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:7120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.548 [2024-12-06 04:22:51.926680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.548 [2024-12-06 04:22:51.926691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:7152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.548 [2024-12-06 04:22:51.926701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.548 [2024-12-06 04:22:51.926711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:7184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.548 [2024-12-06 04:22:51.926720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.548 [2024-12-06 04:22:51.926732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:7192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.548 [2024-12-06 04:22:51.926747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.548 [2024-12-06 04:22:51.926758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:7744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.548 [2024-12-06 04:22:51.926768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.548 [2024-12-06 04:22:51.926778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:7752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.548 [2024-12-06 04:22:51.926788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.548 [2024-12-06 04:22:51.926799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:7760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.548 [2024-12-06 04:22:51.926808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.548 [2024-12-06 04:22:51.926819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:7768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.548 [2024-12-06 04:22:51.926829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.548 [2024-12-06 04:22:51.926840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:7776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.548 [2024-12-06 04:22:51.926849] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.548 [2024-12-06 04:22:51.926860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:7784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.548 [2024-12-06 04:22:51.926870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.548 [2024-12-06 04:22:51.926881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:7792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.548 [2024-12-06 04:22:51.926891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.548 [2024-12-06 04:22:51.926902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:7800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.548 [2024-12-06 04:22:51.926911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.548 [2024-12-06 04:22:51.926923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:7808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.548 [2024-12-06 04:22:51.926932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.548 [2024-12-06 04:22:51.926945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:7816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.548 [2024-12-06 04:22:51.926954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.548 [2024-12-06 04:22:51.926966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:7824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.548 [2024-12-06 04:22:51.926976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.548 [2024-12-06 04:22:51.926987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:7832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.548 [2024-12-06 04:22:51.926996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.548 [2024-12-06 04:22:51.927007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:7840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.548 [2024-12-06 04:22:51.927016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.548 [2024-12-06 04:22:51.927027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:7848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.548 [2024-12-06 04:22:51.927036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.548 [2024-12-06 04:22:51.927047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.548 [2024-12-06 04:22:51.927056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.548 [2024-12-06 04:22:51.927067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:7864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.548 [2024-12-06 04:22:51.927077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.548 [2024-12-06 04:22:51.927087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:7872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.548 [2024-12-06 04:22:51.927097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.548 [2024-12-06 04:22:51.927107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:7880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.548 [2024-12-06 04:22:51.927117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.548 [2024-12-06 04:22:51.927128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:7224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.548 [2024-12-06 04:22:51.927137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.548 [2024-12-06 04:22:51.927149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:7232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.548 [2024-12-06 04:22:51.927158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.548 [2024-12-06 04:22:51.927169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:7248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.548 [2024-12-06 04:22:51.927178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.548 [2024-12-06 04:22:51.927189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.548 [2024-12-06 04:22:51.927198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.548 [2024-12-06 04:22:51.927210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:7272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.548 [2024-12-06 04:22:51.927219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.548 [2024-12-06 04:22:51.927230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:7280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.548 [2024-12-06 04:22:51.927240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.548 [2024-12-06 04:22:51.927251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:7288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.548 [2024-12-06 04:22:51.927261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:19:39.548 [2024-12-06 04:22:51.927272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:7296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.548 [2024-12-06 04:22:51.927282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.548 [2024-12-06 04:22:51.927294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:7888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.548 [2024-12-06 04:22:51.927304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.548 [2024-12-06 04:22:51.927315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:7896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.548 [2024-12-06 04:22:51.927324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.548 [2024-12-06 04:22:51.927344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:7904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.548 [2024-12-06 04:22:51.927353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.548 [2024-12-06 04:22:51.927364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:7912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.549 [2024-12-06 04:22:51.927374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.549 [2024-12-06 04:22:51.927395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:7920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.549 [2024-12-06 04:22:51.927407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.549 [2024-12-06 04:22:51.927418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.549 [2024-12-06 04:22:51.927428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.549 [2024-12-06 04:22:51.927439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:7936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.549 [2024-12-06 04:22:51.927448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.549 [2024-12-06 04:22:51.927459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:7944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.549 [2024-12-06 04:22:51.927468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.549 [2024-12-06 04:22:51.927479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:7952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.549 [2024-12-06 04:22:51.927489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.549 [2024-12-06 
04:22:51.927500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.549 [2024-12-06 04:22:51.927509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.549 [2024-12-06 04:22:51.927520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:7968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.549 [2024-12-06 04:22:51.927529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.549 [2024-12-06 04:22:51.927540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:7976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.549 [2024-12-06 04:22:51.927550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.549 [2024-12-06 04:22:51.927561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:7984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.549 [2024-12-06 04:22:51.927569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.549 [2024-12-06 04:22:51.927580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:7992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.549 [2024-12-06 04:22:51.927590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.549 [2024-12-06 04:22:51.927602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:8000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.549 [2024-12-06 04:22:51.927612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.549 [2024-12-06 04:22:51.927623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:8008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.549 [2024-12-06 04:22:51.927633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.549 [2024-12-06 04:22:51.927644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:8016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.549 [2024-12-06 04:22:51.927653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.549 [2024-12-06 04:22:51.927664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:8024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.549 [2024-12-06 04:22:51.927674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.549 [2024-12-06 04:22:51.927685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:7304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.549 [2024-12-06 04:22:51.927694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.549 [2024-12-06 04:22:51.927705] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.549 [2024-12-06 04:22:51.927714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.549 [2024-12-06 04:22:51.927725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:7344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.549 [2024-12-06 04:22:51.927735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.549 [2024-12-06 04:22:51.927745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:7352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.549 [2024-12-06 04:22:51.927755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.549 [2024-12-06 04:22:51.927766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:7368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.549 [2024-12-06 04:22:51.927775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.549 [2024-12-06 04:22:51.927786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:7392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.549 [2024-12-06 04:22:51.927795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.549 [2024-12-06 04:22:51.927806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:7400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.549 [2024-12-06 04:22:51.927816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.549 [2024-12-06 04:22:51.927827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:7408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.549 [2024-12-06 04:22:51.927836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.549 [2024-12-06 04:22:51.927847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.549 [2024-12-06 04:22:51.927856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.549 [2024-12-06 04:22:51.927868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:8040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.549 [2024-12-06 04:22:51.927877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.549 [2024-12-06 04:22:51.927889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:8048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.549 [2024-12-06 04:22:51.927898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.549 [2024-12-06 04:22:51.927916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:101 nsid:1 lba:8056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.549 [2024-12-06 04:22:51.927926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.549 [2024-12-06 04:22:51.927937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:8064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.549 [2024-12-06 04:22:51.927946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.549 [2024-12-06 04:22:51.927957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:8072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.549 [2024-12-06 04:22:51.927967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.549 [2024-12-06 04:22:51.927978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:8080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.549 [2024-12-06 04:22:51.927987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.549 [2024-12-06 04:22:51.927998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:8088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.549 [2024-12-06 04:22:51.928007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.549 [2024-12-06 04:22:51.928018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:8096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.549 [2024-12-06 04:22:51.928027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.549 [2024-12-06 04:22:51.928038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:8104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.549 [2024-12-06 04:22:51.928047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.549 [2024-12-06 04:22:51.928058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:8112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.549 [2024-12-06 04:22:51.928067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.549 [2024-12-06 04:22:51.928078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:7416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.549 [2024-12-06 04:22:51.928088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.549 [2024-12-06 04:22:51.928099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.549 [2024-12-06 04:22:51.928108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.549 [2024-12-06 04:22:51.928120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:7456 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:19:39.549 [2024-12-06 04:22:51.928129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.549 [2024-12-06 04:22:51.928140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:7488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.549 [2024-12-06 04:22:51.928150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.549 [2024-12-06 04:22:51.928162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:7496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.550 [2024-12-06 04:22:51.928171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.550 [2024-12-06 04:22:51.928182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:7520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.550 [2024-12-06 04:22:51.928192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.550 [2024-12-06 04:22:51.928203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:7528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.550 [2024-12-06 04:22:51.928212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.550 [2024-12-06 04:22:51.928223] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fae40 is same with the state(5) to be set 00:19:39.550 [2024-12-06 04:22:51.928237] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:39.550 [2024-12-06 04:22:51.928250] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:39.550 [2024-12-06 04:22:51.928259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7544 len:8 PRP1 0x0 PRP2 0x0 00:19:39.550 [2024-12-06 04:22:51.928269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.550 [2024-12-06 04:22:51.928322] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6fae40 was disconnected and freed. reset controller. 
00:19:39.550 [2024-12-06 04:22:51.928561] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:39.550 [2024-12-06 04:22:51.928643] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4c20 (9): Bad file descriptor 00:19:39.550 [2024-12-06 04:22:51.928767] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:39.550 [2024-12-06 04:22:51.928822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:39.550 [2024-12-06 04:22:51.928866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:39.550 [2024-12-06 04:22:51.928882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4c20 with addr=10.0.0.2, port=4420 00:19:39.550 [2024-12-06 04:22:51.928893] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4c20 is same with the state(5) to be set 00:19:39.550 [2024-12-06 04:22:51.928913] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4c20 (9): Bad file descriptor 00:19:39.550 [2024-12-06 04:22:51.928930] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:39.550 [2024-12-06 04:22:51.928940] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:39.550 [2024-12-06 04:22:51.928950] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:39.550 [2024-12-06 04:22:51.928971] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:39.550 [2024-12-06 04:22:51.928982] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:39.550 04:22:51 -- host/timeout.sh@101 -- # sleep 3 00:19:40.485 [2024-12-06 04:22:52.929083] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:40.486 [2024-12-06 04:22:52.929199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:40.486 [2024-12-06 04:22:52.929242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:40.486 [2024-12-06 04:22:52.929258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4c20 with addr=10.0.0.2, port=4420 00:19:40.486 [2024-12-06 04:22:52.929287] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4c20 is same with the state(5) to be set 00:19:40.486 [2024-12-06 04:22:52.929343] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4c20 (9): Bad file descriptor 00:19:40.486 [2024-12-06 04:22:52.929363] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:40.486 [2024-12-06 04:22:52.929373] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:40.486 [2024-12-06 04:22:52.929383] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:40.486 [2024-12-06 04:22:52.929410] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:40.486 [2024-12-06 04:22:52.929422] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:41.420 [2024-12-06 04:22:53.929611] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:41.420 [2024-12-06 04:22:53.929705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:41.420 [2024-12-06 04:22:53.929752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:41.420 [2024-12-06 04:22:53.929777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4c20 with addr=10.0.0.2, port=4420 00:19:41.420 [2024-12-06 04:22:53.929791] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4c20 is same with the state(5) to be set 00:19:41.420 [2024-12-06 04:22:53.929817] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4c20 (9): Bad file descriptor 00:19:41.420 [2024-12-06 04:22:53.929838] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:41.420 [2024-12-06 04:22:53.929848] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:41.420 [2024-12-06 04:22:53.929858] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:41.420 [2024-12-06 04:22:53.929887] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:41.420 [2024-12-06 04:22:53.929906] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:42.796 [2024-12-06 04:22:54.931620] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:42.796 [2024-12-06 04:22:54.931743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:42.796 [2024-12-06 04:22:54.931787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:42.796 [2024-12-06 04:22:54.931803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f4c20 with addr=10.0.0.2, port=4420 00:19:42.796 [2024-12-06 04:22:54.931817] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f4c20 is same with the state(5) to be set 00:19:42.796 [2024-12-06 04:22:54.932035] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f4c20 (9): Bad file descriptor 00:19:42.796 [2024-12-06 04:22:54.932173] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:42.796 [2024-12-06 04:22:54.932193] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:42.796 [2024-12-06 04:22:54.932204] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:42.796 [2024-12-06 04:22:54.934677] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:42.796 [2024-12-06 04:22:54.934725] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:42.796 04:22:54 -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:42.796 [2024-12-06 04:22:55.217898] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:42.796 04:22:55 -- host/timeout.sh@103 -- # wait 86288 00:19:43.728 [2024-12-06 04:22:55.952155] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:19:49.037 00:19:49.037 Latency(us) 00:19:49.037 [2024-12-06T04:23:01.602Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:49.037 [2024-12-06T04:23:01.602Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:49.037 Verification LBA range: start 0x0 length 0x4000 00:19:49.037 NVMe0n1 : 10.01 8721.40 34.07 6159.04 0.00 8588.52 441.25 3019898.88 00:19:49.037 [2024-12-06T04:23:01.602Z] =================================================================================================================== 00:19:49.037 [2024-12-06T04:23:01.602Z] Total : 8721.40 34.07 6159.04 0.00 8588.52 0.00 3019898.88 00:19:49.037 0 00:19:49.037 04:23:00 -- host/timeout.sh@105 -- # killprocess 86160 00:19:49.037 04:23:00 -- common/autotest_common.sh@936 -- # '[' -z 86160 ']' 00:19:49.037 04:23:00 -- common/autotest_common.sh@940 -- # kill -0 86160 00:19:49.037 04:23:00 -- common/autotest_common.sh@941 -- # uname 00:19:49.037 04:23:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:49.037 04:23:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86160 00:19:49.037 killing process with pid 86160 00:19:49.037 Received shutdown signal, test time was about 10.000000 seconds 00:19:49.037 00:19:49.037 Latency(us) 00:19:49.037 [2024-12-06T04:23:01.602Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:49.037 [2024-12-06T04:23:01.602Z] =================================================================================================================== 00:19:49.037 [2024-12-06T04:23:01.602Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:49.037 04:23:00 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:19:49.037 04:23:00 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:19:49.037 04:23:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86160' 00:19:49.037 04:23:00 -- common/autotest_common.sh@955 -- # kill 86160 00:19:49.037 04:23:00 -- common/autotest_common.sh@960 -- # wait 86160 00:19:49.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:19:49.037 04:23:01 -- host/timeout.sh@110 -- # bdevperf_pid=86404 00:19:49.037 04:23:01 -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:19:49.037 04:23:01 -- host/timeout.sh@112 -- # waitforlisten 86404 /var/tmp/bdevperf.sock 00:19:49.037 04:23:01 -- common/autotest_common.sh@829 -- # '[' -z 86404 ']' 00:19:49.037 04:23:01 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:49.037 04:23:01 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:49.037 04:23:01 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:49.037 04:23:01 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:49.037 04:23:01 -- common/autotest_common.sh@10 -- # set +x 00:19:49.037 [2024-12-06 04:23:01.042823] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:19:49.037 [2024-12-06 04:23:01.043143] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86404 ] 00:19:49.037 [2024-12-06 04:23:01.175826] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:49.037 [2024-12-06 04:23:01.238857] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:49.603 04:23:02 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:49.603 04:23:02 -- common/autotest_common.sh@862 -- # return 0 00:19:49.603 04:23:02 -- host/timeout.sh@116 -- # dtrace_pid=86420 00:19:49.603 04:23:02 -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 86404 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:19:49.603 04:23:02 -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:19:49.861 04:23:02 -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:19:50.119 NVMe0n1 00:19:50.119 04:23:02 -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:50.119 04:23:02 -- host/timeout.sh@124 -- # rpc_pid=86460 00:19:50.119 04:23:02 -- host/timeout.sh@125 -- # sleep 1 00:19:50.378 Running I/O for 10 seconds... 
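For reference, the bdevperf setup that the log lines above record (host/timeout.sh lines 109-125) reduces to the following sequence. This is only a sketch assembled from the commands visible in this log, not a canonical recipe; the backgrounding and pid capture are idiomatic bash stand-ins for what the harness does, and the comments on the reconnect flags are paraphrased from the flag names.

    # launch bdevperf idle (-z) on its own RPC socket; the run is triggered later
    # via perform_tests (timeout.sh@109); core mask 0x4 matches "Reactor started on core 2"
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w randread -t 10 -f &
    bdevperf_pid=$!

    # apply the test's bdev_nvme options, then attach the target as bdev NVMe0
    # (--ctrlr-loss-timeout-sec 5: stop trying to recover the controller after ~5 s of loss;
    #  --reconnect-delay-sec 2: wait 2 s between reconnect attempts) (timeout.sh@118, @120)
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2

    # start the workload defined on the bdevperf command line (timeout.sh@123)
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
    rpc_pid=$!

The listener removal that follows ("nvmf_subsystem_remove_listener ... -s 4420") is what forces the connection loss whose reconnect behavior the flags above are exercising.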
00:19:51.314 04:23:03 -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:51.314 [2024-12-06 04:23:03.823799] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.314 [2024-12-06 04:23:03.823864] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.314 [2024-12-06 04:23:03.823891] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.314 [2024-12-06 04:23:03.823899] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.314 [2024-12-06 04:23:03.823907] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.314 [2024-12-06 04:23:03.823920] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.314 [2024-12-06 04:23:03.823928] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.314 [2024-12-06 04:23:03.823936] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.314 [2024-12-06 04:23:03.823945] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.314 [2024-12-06 04:23:03.823952] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.314 [2024-12-06 04:23:03.823959] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.314 [2024-12-06 04:23:03.823967] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.314 [2024-12-06 04:23:03.823974] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.314 [2024-12-06 04:23:03.823981] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.314 [2024-12-06 04:23:03.823988] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.314 [2024-12-06 04:23:03.823997] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.314 [2024-12-06 04:23:03.824004] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.314 [2024-12-06 04:23:03.824011] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.314 [2024-12-06 04:23:03.824018] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.314 [2024-12-06 04:23:03.824025] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.314 [2024-12-06 04:23:03.824032] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.314 [2024-12-06 04:23:03.824039] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.314 [2024-12-06 04:23:03.824047] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.314 [2024-12-06 04:23:03.824056] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.314 [2024-12-06 04:23:03.824063] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.314 [2024-12-06 04:23:03.824070] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.314 [2024-12-06 04:23:03.824078] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.314 [2024-12-06 04:23:03.824085] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.314 [2024-12-06 04:23:03.824093] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.314 [2024-12-06 04:23:03.824101] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.314 [2024-12-06 04:23:03.824124] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.314 [2024-12-06 04:23:03.824148] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.314 [2024-12-06 04:23:03.824172] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.314 [2024-12-06 04:23:03.824180] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.314 [2024-12-06 04:23:03.824188] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.314 [2024-12-06 04:23:03.824196] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.314 [2024-12-06 04:23:03.824204] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.314 [2024-12-06 04:23:03.824212] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.314 [2024-12-06 04:23:03.824220] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.314 [2024-12-06 04:23:03.824228] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.314 [2024-12-06 04:23:03.824236] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.314 [2024-12-06 04:23:03.824244] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.314 [2024-12-06 04:23:03.824252] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.314 [2024-12-06 04:23:03.824266] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.314 [2024-12-06 04:23:03.824273] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.314 [2024-12-06 04:23:03.824281] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.314 [2024-12-06 04:23:03.824289] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.314 [2024-12-06 04:23:03.824297] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.314 [2024-12-06 04:23:03.824305] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.314 [2024-12-06 04:23:03.824312] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.314 [2024-12-06 04:23:03.824320] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.314 [2024-12-06 04:23:03.824328] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.314 [2024-12-06 04:23:03.824335] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.314 [2024-12-06 04:23:03.824343] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.314 [2024-12-06 04:23:03.824350] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.314 [2024-12-06 04:23:03.824358] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.314 [2024-12-06 04:23:03.824367] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.314 [2024-12-06 04:23:03.824375] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.314 [2024-12-06 04:23:03.824382] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.314 [2024-12-06 04:23:03.824390] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.315 [2024-12-06 04:23:03.824398] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.315 [2024-12-06 04:23:03.824406] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.315 [2024-12-06 04:23:03.824414] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.315 [2024-12-06 04:23:03.824422] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 
00:19:51.315 [2024-12-06 04:23:03.824430] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.315 [2024-12-06 04:23:03.824438] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.315 [2024-12-06 04:23:03.824446] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.315 [2024-12-06 04:23:03.824468] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.315 [2024-12-06 04:23:03.824477] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.315 [2024-12-06 04:23:03.824485] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.315 [2024-12-06 04:23:03.824493] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.315 [2024-12-06 04:23:03.824501] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.315 [2024-12-06 04:23:03.824509] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.315 [2024-12-06 04:23:03.824517] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.315 [2024-12-06 04:23:03.824525] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.315 [2024-12-06 04:23:03.824533] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.315 [2024-12-06 04:23:03.824540] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.315 [2024-12-06 04:23:03.824548] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.315 [2024-12-06 04:23:03.824556] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.315 [2024-12-06 04:23:03.824563] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.315 [2024-12-06 04:23:03.824577] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.315 [2024-12-06 04:23:03.824585] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.315 [2024-12-06 04:23:03.824592] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.315 [2024-12-06 04:23:03.824600] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.315 [2024-12-06 04:23:03.824607] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.315 [2024-12-06 04:23:03.824615] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is 
same with the state(5) to be set 00:19:51.315 [2024-12-06 04:23:03.824623] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.315 [2024-12-06 04:23:03.824631] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.315 [2024-12-06 04:23:03.824638] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.315 [2024-12-06 04:23:03.824646] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.315 [2024-12-06 04:23:03.824654] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.315 [2024-12-06 04:23:03.824662] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.315 [2024-12-06 04:23:03.824670] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.315 [2024-12-06 04:23:03.824678] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.315 [2024-12-06 04:23:03.824686] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.315 [2024-12-06 04:23:03.824694] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.315 [2024-12-06 04:23:03.824701] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.315 [2024-12-06 04:23:03.824709] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.315 [2024-12-06 04:23:03.824717] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.315 [2024-12-06 04:23:03.824724] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.315 [2024-12-06 04:23:03.824732] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.315 [2024-12-06 04:23:03.824747] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.315 [2024-12-06 04:23:03.824756] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.315 [2024-12-06 04:23:03.824782] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.315 [2024-12-06 04:23:03.824790] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.315 [2024-12-06 04:23:03.824799] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.315 [2024-12-06 04:23:03.824807] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.315 [2024-12-06 04:23:03.824819] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.315 [2024-12-06 04:23:03.824827] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.315 [2024-12-06 04:23:03.824835] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.315 [2024-12-06 04:23:03.824843] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.315 [2024-12-06 04:23:03.824851] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.315 [2024-12-06 04:23:03.824859] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.315 [2024-12-06 04:23:03.824867] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.315 [2024-12-06 04:23:03.824875] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.315 [2024-12-06 04:23:03.824883] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.315 [2024-12-06 04:23:03.824890] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.315 [2024-12-06 04:23:03.824898] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.315 [2024-12-06 04:23:03.824907] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.315 [2024-12-06 04:23:03.824915] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.315 [2024-12-06 04:23:03.824925] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.315 [2024-12-06 04:23:03.824933] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.315 [2024-12-06 04:23:03.824941] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.315 [2024-12-06 04:23:03.824949] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.315 [2024-12-06 04:23:03.824957] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.315 [2024-12-06 04:23:03.824965] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd742f0 is same with the state(5) to be set 00:19:51.315 [2024-12-06 04:23:03.825054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:50056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.315 [2024-12-06 04:23:03.825083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.315 [2024-12-06 04:23:03.825120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:79768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.315 [2024-12-06 
04:23:03.825131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.315 [2024-12-06 04:23:03.825143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:36328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.315 [2024-12-06 04:23:03.825152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.315 [2024-12-06 04:23:03.825163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:28464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.315 [2024-12-06 04:23:03.825171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.315 [2024-12-06 04:23:03.825182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:76128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.315 [2024-12-06 04:23:03.825191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.315 [2024-12-06 04:23:03.825202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.315 [2024-12-06 04:23:03.825210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.315 [2024-12-06 04:23:03.825221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:98800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.315 [2024-12-06 04:23:03.825229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.315 [2024-12-06 04:23:03.825240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:33632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.315 [2024-12-06 04:23:03.825249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.315 [2024-12-06 04:23:03.825259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:112600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.316 [2024-12-06 04:23:03.825268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.316 [2024-12-06 04:23:03.825280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.316 [2024-12-06 04:23:03.825289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.316 [2024-12-06 04:23:03.825301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:61504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.316 [2024-12-06 04:23:03.825310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.316 [2024-12-06 04:23:03.825321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.316 [2024-12-06 04:23:03.825329] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.316 [2024-12-06 04:23:03.825340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:64896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.316 [2024-12-06 04:23:03.825349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.316 [2024-12-06 04:23:03.825359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:121128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.316 [2024-12-06 04:23:03.825368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.316 [2024-12-06 04:23:03.825379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.316 [2024-12-06 04:23:03.825387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.316 [2024-12-06 04:23:03.825398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.316 [2024-12-06 04:23:03.825407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.316 [2024-12-06 04:23:03.825430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:56928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.316 [2024-12-06 04:23:03.825442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.316 [2024-12-06 04:23:03.825453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:127576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.316 [2024-12-06 04:23:03.825461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.316 [2024-12-06 04:23:03.825473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:62984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.316 [2024-12-06 04:23:03.825482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.316 [2024-12-06 04:23:03.825492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:114288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.316 [2024-12-06 04:23:03.825502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.316 [2024-12-06 04:23:03.825513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:1000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.316 [2024-12-06 04:23:03.825532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.316 [2024-12-06 04:23:03.825543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:125848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.316 [2024-12-06 04:23:03.825552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.316 [2024-12-06 04:23:03.825563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:9368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.316 [2024-12-06 04:23:03.825572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.316 [2024-12-06 04:23:03.825582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:72984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.316 [2024-12-06 04:23:03.825591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.316 [2024-12-06 04:23:03.825602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:44328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.316 [2024-12-06 04:23:03.825610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.316 [2024-12-06 04:23:03.825621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:101328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.316 [2024-12-06 04:23:03.825629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.316 [2024-12-06 04:23:03.825639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:88944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.316 [2024-12-06 04:23:03.825648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.316 [2024-12-06 04:23:03.825658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:94904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.316 [2024-12-06 04:23:03.825667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.316 [2024-12-06 04:23:03.825678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:65200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.316 [2024-12-06 04:23:03.825686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.316 [2024-12-06 04:23:03.825697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:54112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.316 [2024-12-06 04:23:03.825705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.316 [2024-12-06 04:23:03.825716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:104416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.316 [2024-12-06 04:23:03.825724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.316 [2024-12-06 04:23:03.825736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:94832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.316 [2024-12-06 04:23:03.825745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.316 [2024-12-06 04:23:03.825762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:103024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.316 [2024-12-06 04:23:03.825772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.316 [2024-12-06 04:23:03.825782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:85032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.316 [2024-12-06 04:23:03.825791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.316 [2024-12-06 04:23:03.825802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:21656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.316 [2024-12-06 04:23:03.825810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.316 [2024-12-06 04:23:03.825821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:8424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.316 [2024-12-06 04:23:03.825830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.316 [2024-12-06 04:23:03.825846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:26616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.316 [2024-12-06 04:23:03.825855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.316 [2024-12-06 04:23:03.825865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:72088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.316 [2024-12-06 04:23:03.825874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.316 [2024-12-06 04:23:03.825888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:55784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.316 [2024-12-06 04:23:03.825897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.316 [2024-12-06 04:23:03.825912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:123976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.316 [2024-12-06 04:23:03.825921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.316 [2024-12-06 04:23:03.825932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:128064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.316 [2024-12-06 04:23:03.825941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.316 [2024-12-06 04:23:03.825951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:125072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.316 [2024-12-06 04:23:03.825960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:51.316 [2024-12-06 04:23:03.825970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:44440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.316 [2024-12-06 04:23:03.825985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.316 [2024-12-06 04:23:03.825996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.316 [2024-12-06 04:23:03.826005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.316 [2024-12-06 04:23:03.826015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:37336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.316 [2024-12-06 04:23:03.826024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.316 [2024-12-06 04:23:03.826035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:98160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.316 [2024-12-06 04:23:03.826043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.316 [2024-12-06 04:23:03.826053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:41696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.316 [2024-12-06 04:23:03.826062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.316 [2024-12-06 04:23:03.826073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:59840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.316 [2024-12-06 04:23:03.826082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.316 [2024-12-06 04:23:03.826097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:45248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.317 [2024-12-06 04:23:03.826106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.317 [2024-12-06 04:23:03.826117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:110104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.317 [2024-12-06 04:23:03.826126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.317 [2024-12-06 04:23:03.826136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:43656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.317 [2024-12-06 04:23:03.826145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.317 [2024-12-06 04:23:03.826156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:118680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.317 [2024-12-06 04:23:03.826164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.317 [2024-12-06 04:23:03.826180] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:9864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.317 [2024-12-06 04:23:03.826189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.317 [2024-12-06 04:23:03.826200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:7672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.317 [2024-12-06 04:23:03.826208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.317 [2024-12-06 04:23:03.826219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:99040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.317 [2024-12-06 04:23:03.826227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.317 [2024-12-06 04:23:03.826238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:70128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.317 [2024-12-06 04:23:03.826246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.317 [2024-12-06 04:23:03.826257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:114200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.317 [2024-12-06 04:23:03.826265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.317 [2024-12-06 04:23:03.826276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:57832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.317 [2024-12-06 04:23:03.826285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.317 [2024-12-06 04:23:03.826295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:59296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.317 [2024-12-06 04:23:03.826304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.317 [2024-12-06 04:23:03.826315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:126544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.317 [2024-12-06 04:23:03.826324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.317 [2024-12-06 04:23:03.826334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:100376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.317 [2024-12-06 04:23:03.826343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.317 [2024-12-06 04:23:03.826353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:116304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.317 [2024-12-06 04:23:03.826362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.317 [2024-12-06 04:23:03.826372] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:115208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.317 [2024-12-06 04:23:03.826381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.317 [2024-12-06 04:23:03.826401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:50176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.317 [2024-12-06 04:23:03.826411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.317 [2024-12-06 04:23:03.826426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:2664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.317 [2024-12-06 04:23:03.826436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.317 [2024-12-06 04:23:03.826446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:33048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.317 [2024-12-06 04:23:03.826455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.317 [2024-12-06 04:23:03.826465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:112872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.317 [2024-12-06 04:23:03.826474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.317 [2024-12-06 04:23:03.826484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:41856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.317 [2024-12-06 04:23:03.826493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.317 [2024-12-06 04:23:03.826509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:94368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.317 [2024-12-06 04:23:03.826518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.317 [2024-12-06 04:23:03.826528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:57760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.317 [2024-12-06 04:23:03.826537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.317 [2024-12-06 04:23:03.826548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.317 [2024-12-06 04:23:03.826557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.317 [2024-12-06 04:23:03.826567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:32520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.317 [2024-12-06 04:23:03.826576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.317 [2024-12-06 04:23:03.826611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:72 nsid:1 lba:121792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.317 [2024-12-06 04:23:03.826621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.317 [2024-12-06 04:23:03.826632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:49360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.317 [2024-12-06 04:23:03.826641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.317 [2024-12-06 04:23:03.826652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:2704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.317 [2024-12-06 04:23:03.826661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.317 [2024-12-06 04:23:03.826672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:55864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.317 [2024-12-06 04:23:03.826681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.317 [2024-12-06 04:23:03.826691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:4688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.317 [2024-12-06 04:23:03.826700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.317 [2024-12-06 04:23:03.826711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:119048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.317 [2024-12-06 04:23:03.826719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.317 [2024-12-06 04:23:03.826730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:63064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.317 [2024-12-06 04:23:03.826739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.317 [2024-12-06 04:23:03.826750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:47664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.317 [2024-12-06 04:23:03.826759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.317 [2024-12-06 04:23:03.826775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:127056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.317 [2024-12-06 04:23:03.826784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.317 [2024-12-06 04:23:03.826795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:11824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.317 [2024-12-06 04:23:03.826804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.317 [2024-12-06 04:23:03.826815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:123048 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.317 [2024-12-06 04:23:03.826823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.317 [2024-12-06 04:23:03.826834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:54688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.317 [2024-12-06 04:23:03.826843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.317 [2024-12-06 04:23:03.826859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:5952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.317 [2024-12-06 04:23:03.826868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.317 [2024-12-06 04:23:03.826879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:85408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.317 [2024-12-06 04:23:03.826889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.317 [2024-12-06 04:23:03.826900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:46016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.317 [2024-12-06 04:23:03.826909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.317 [2024-12-06 04:23:03.826921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:1176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.317 [2024-12-06 04:23:03.826930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.318 [2024-12-06 04:23:03.826941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:40496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.318 [2024-12-06 04:23:03.826950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.318 [2024-12-06 04:23:03.826961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:85584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.318 [2024-12-06 04:23:03.826969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.318 [2024-12-06 04:23:03.826980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:14176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.318 [2024-12-06 04:23:03.826989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.318 [2024-12-06 04:23:03.827000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:78032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.318 [2024-12-06 04:23:03.827023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.318 [2024-12-06 04:23:03.827034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:51.318 [2024-12-06 04:23:03.827042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.318 [2024-12-06 04:23:03.827053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:116888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.318 [2024-12-06 04:23:03.827061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.318 [2024-12-06 04:23:03.827072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.318 [2024-12-06 04:23:03.827081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.318 [2024-12-06 04:23:03.827092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:50528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.318 [2024-12-06 04:23:03.827101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.318 [2024-12-06 04:23:03.827116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:37752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.318 [2024-12-06 04:23:03.827126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.318 [2024-12-06 04:23:03.827136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:32352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.318 [2024-12-06 04:23:03.827145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.318 [2024-12-06 04:23:03.827156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:66256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.318 [2024-12-06 04:23:03.827165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.318 [2024-12-06 04:23:03.827175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:7920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.318 [2024-12-06 04:23:03.827184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.318 [2024-12-06 04:23:03.827199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:19296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.318 [2024-12-06 04:23:03.827208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.318 [2024-12-06 04:23:03.827218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:92928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.318 [2024-12-06 04:23:03.827227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.318 [2024-12-06 04:23:03.827238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:73128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.318 [2024-12-06 04:23:03.827246] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.318 [2024-12-06 04:23:03.827257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:113568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.318 [2024-12-06 04:23:03.827266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.318 [2024-12-06 04:23:03.827276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:120408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.318 [2024-12-06 04:23:03.827284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.318 [2024-12-06 04:23:03.827295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:21360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.318 [2024-12-06 04:23:03.827303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.318 [2024-12-06 04:23:03.827314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:85784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.318 [2024-12-06 04:23:03.827323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.318 [2024-12-06 04:23:03.827333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:27416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.318 [2024-12-06 04:23:03.827342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.318 [2024-12-06 04:23:03.827352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:100064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.318 [2024-12-06 04:23:03.827361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.318 [2024-12-06 04:23:03.827371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:71168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.318 [2024-12-06 04:23:03.827380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.318 [2024-12-06 04:23:03.827390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:57960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.318 [2024-12-06 04:23:03.827399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.318 [2024-12-06 04:23:03.827409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:81536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.318 [2024-12-06 04:23:03.827426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.318 [2024-12-06 04:23:03.827443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:91368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.318 [2024-12-06 04:23:03.827452] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.318 [2024-12-06 04:23:03.827463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:96928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.318 [2024-12-06 04:23:03.827471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.319 [2024-12-06 04:23:03.827482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:82008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.319 [2024-12-06 04:23:03.827491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.319 [2024-12-06 04:23:03.827502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:39032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.319 [2024-12-06 04:23:03.827510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.319 [2024-12-06 04:23:03.827526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:130576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.319 [2024-12-06 04:23:03.827535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.319 [2024-12-06 04:23:03.827546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:90800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.319 [2024-12-06 04:23:03.827555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.319 [2024-12-06 04:23:03.827566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:106296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.319 [2024-12-06 04:23:03.827574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.319 [2024-12-06 04:23:03.827585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:113944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.319 [2024-12-06 04:23:03.827593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.319 [2024-12-06 04:23:03.827604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:95568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.319 [2024-12-06 04:23:03.827612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.319 [2024-12-06 04:23:03.827623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:14616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.319 [2024-12-06 04:23:03.827632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.319 [2024-12-06 04:23:03.827642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:94904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.319 [2024-12-06 04:23:03.827651] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.319 [2024-12-06 04:23:03.827661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:74264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.319 [2024-12-06 04:23:03.827670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.319 [2024-12-06 04:23:03.827681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:13096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.319 [2024-12-06 04:23:03.827689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.319 [2024-12-06 04:23:03.827700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:25144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.319 [2024-12-06 04:23:03.827709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.319 [2024-12-06 04:23:03.827719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:130088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.319 [2024-12-06 04:23:03.827728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.319 [2024-12-06 04:23:03.827737] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a2070 is same with the state(5) to be set 00:19:51.319 [2024-12-06 04:23:03.827749] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:51.319 [2024-12-06 04:23:03.827776] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:51.319 [2024-12-06 04:23:03.827785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:114328 len:8 PRP1 0x0 PRP2 0x0 00:19:51.319 [2024-12-06 04:23:03.827794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.319 [2024-12-06 04:23:03.827846] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6a2070 was disconnected and freed. reset controller. 
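Note: the burst of NOTICE lines above is the expected fallout of tearing the qpair down mid-I/O; every outstanding READ on sqid 1 is completed with ABORTED - SQ DELETION before the qpair is freed and the controller is reset. The burst spans cid 0 through 126, and together with the one command completed manually just above it accounts for the job's queue depth of 128 reported later in the summary. When triaging a capture of output like this, a quick way to confirm the burst covers the whole queue is to count the distinct command IDs; a small sketch, assuming the console output was saved to a hypothetical build.log:

    # build.log is a hypothetical capture of this console output.
    # Count the distinct I/O command IDs aborted by the SQ deletion.
    grep -o 'READ sqid:1 cid:[0-9]*' build.log | sort -u | wc -l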
00:19:51.319 [2024-12-06 04:23:03.827942] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:51.319 [2024-12-06 04:23:03.827958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.319 [2024-12-06 04:23:03.827968] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:51.319 [2024-12-06 04:23:03.827978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.319 [2024-12-06 04:23:03.827993] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:51.319 [2024-12-06 04:23:03.828003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.319 [2024-12-06 04:23:03.828012] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:51.319 [2024-12-06 04:23:03.828021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.319 [2024-12-06 04:23:03.828030] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x66fea0 is same with the state(5) to be set 00:19:51.319 [2024-12-06 04:23:03.828281] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:51.319 [2024-12-06 04:23:03.828325] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x66fea0 (9): Bad file descriptor 00:19:51.319 [2024-12-06 04:23:03.828440] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:51.319 [2024-12-06 04:23:03.828513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:51.319 [2024-12-06 04:23:03.828562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:51.319 [2024-12-06 04:23:03.828578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x66fea0 with addr=10.0.0.2, port=4420 00:19:51.319 [2024-12-06 04:23:03.828588] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x66fea0 is same with the state(5) to be set 00:19:51.319 [2024-12-06 04:23:03.828608] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x66fea0 (9): Bad file descriptor 00:19:51.319 [2024-12-06 04:23:03.828624] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:51.319 [2024-12-06 04:23:03.828633] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:51.319 [2024-12-06 04:23:03.841707] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:51.319 [2024-12-06 04:23:03.841799] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
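Note: from here the host enters its reconnect loop. The target side of the connection is gone, so each connect() fails with errno 111 (connection refused), controller reinitialization fails, and bdev_nvme schedules the next reset after the configured reconnect delay. How aggressively it retries is governed by the reconnect options supplied when the controller is attached. A sketch of such an attach call follows; the address, NQN and timeout values are illustrative, and the long option names follow recent SPDK rpc.py conventions, so treat them as assumptions and confirm with 'rpc.py bdev_nvme_attach_controller -h' on this branch:

    # Illustrative only: values and long-option spellings are assumptions,
    # not the exact call made by host/timeout.sh.
    scripts/rpc.py bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 30 \
        --reconnect-delay-sec 2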
00:19:51.319 [2024-12-06 04:23:03.841819] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:51.319 04:23:03 -- host/timeout.sh@128 -- # wait 86460 00:19:53.869 [2024-12-06 04:23:05.842037] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:53.869 [2024-12-06 04:23:05.842151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:53.869 [2024-12-06 04:23:05.842208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:53.869 [2024-12-06 04:23:05.842223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x66fea0 with addr=10.0.0.2, port=4420 00:19:53.869 [2024-12-06 04:23:05.842235] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x66fea0 is same with the state(5) to be set 00:19:53.869 [2024-12-06 04:23:05.842258] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x66fea0 (9): Bad file descriptor 00:19:53.869 [2024-12-06 04:23:05.842275] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:53.869 [2024-12-06 04:23:05.842285] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:53.869 [2024-12-06 04:23:05.842295] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:53.869 [2024-12-06 04:23:05.842319] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:53.869 [2024-12-06 04:23:05.842329] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:55.772 [2024-12-06 04:23:07.842536] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:55.772 [2024-12-06 04:23:07.842682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:55.772 [2024-12-06 04:23:07.842725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:55.772 [2024-12-06 04:23:07.842741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x66fea0 with addr=10.0.0.2, port=4420 00:19:55.772 [2024-12-06 04:23:07.842754] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x66fea0 is same with the state(5) to be set 00:19:55.772 [2024-12-06 04:23:07.842778] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x66fea0 (9): Bad file descriptor 00:19:55.772 [2024-12-06 04:23:07.842796] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:55.772 [2024-12-06 04:23:07.842806] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:55.772 [2024-12-06 04:23:07.842816] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:55.772 [2024-12-06 04:23:07.842843] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:55.772 [2024-12-06 04:23:07.842853] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:57.675 [2024-12-06 04:23:09.842969] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
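Note: each failed attempt above lands roughly two seconds after the previous one, which is the reconnect delay this test is trying to observe. The check that follows (host/timeout.sh@132) simply counts the 'reconnect delay' probe lines recorded in trace.txt and fails if there are too few. A minimal standalone version of that assertion, using a hypothetical trace path:

    # /tmp/trace.txt is a hypothetical path; the pattern matches the probe
    # lines printed by 'cat trace.txt' further down in this log.
    delays=$(grep -c 'reconnect delay bdev controller NVMe0' /tmp/trace.txt)
    if (( delays <= 2 )); then
        echo "expected at least 3 reconnect delays, saw $delays" >&2
        exit 1
    fi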
00:19:57.675 [2024-12-06 04:23:09.843030] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:57.676 [2024-12-06 04:23:09.843042] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:57.676 [2024-12-06 04:23:09.843052] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:19:57.676 [2024-12-06 04:23:09.843080] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:58.612 00:19:58.612 Latency(us) 00:19:58.612 [2024-12-06T04:23:11.177Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:58.612 [2024-12-06T04:23:11.177Z] Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:19:58.612 NVMe0n1 : 8.13 2218.78 8.67 15.75 0.00 57188.93 7268.54 7046430.72 00:19:58.612 [2024-12-06T04:23:11.177Z] =================================================================================================================== 00:19:58.612 [2024-12-06T04:23:11.177Z] Total : 2218.78 8.67 15.75 0.00 57188.93 7268.54 7046430.72 00:19:58.612 0 00:19:58.612 04:23:10 -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:58.612 Attaching 5 probes... 00:19:58.612 1272.787244: reset bdev controller NVMe0 00:19:58.612 1272.876724: reconnect bdev controller NVMe0 00:19:58.612 3286.388546: reconnect delay bdev controller NVMe0 00:19:58.612 3286.460262: reconnect bdev controller NVMe0 00:19:58.612 5286.914722: reconnect delay bdev controller NVMe0 00:19:58.612 5286.953787: reconnect bdev controller NVMe0 00:19:58.612 7287.424695: reconnect delay bdev controller NVMe0 00:19:58.612 7287.468617: reconnect bdev controller NVMe0 00:19:58.612 04:23:10 -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:19:58.612 04:23:10 -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:19:58.612 04:23:10 -- host/timeout.sh@136 -- # kill 86420 00:19:58.612 04:23:10 -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:58.612 04:23:10 -- host/timeout.sh@139 -- # killprocess 86404 00:19:58.612 04:23:10 -- common/autotest_common.sh@936 -- # '[' -z 86404 ']' 00:19:58.612 04:23:10 -- common/autotest_common.sh@940 -- # kill -0 86404 00:19:58.612 04:23:10 -- common/autotest_common.sh@941 -- # uname 00:19:58.612 04:23:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:58.612 04:23:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86404 00:19:58.612 killing process with pid 86404 00:19:58.612 Received shutdown signal, test time was about 8.200193 seconds 00:19:58.612 00:19:58.612 Latency(us) 00:19:58.612 [2024-12-06T04:23:11.177Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:58.612 [2024-12-06T04:23:11.177Z] =================================================================================================================== 00:19:58.612 [2024-12-06T04:23:11.177Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:58.612 04:23:10 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:19:58.612 04:23:10 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:19:58.612 04:23:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86404' 00:19:58.612 04:23:10 -- common/autotest_common.sh@955 -- # kill 86404 00:19:58.612 04:23:10 -- common/autotest_common.sh@960 -- # wait 86404 00:19:58.612 04:23:11 
-- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:58.871 04:23:11 -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:19:58.871 04:23:11 -- host/timeout.sh@145 -- # nvmftestfini 00:19:58.871 04:23:11 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:58.871 04:23:11 -- nvmf/common.sh@116 -- # sync 00:19:58.871 04:23:11 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:58.871 04:23:11 -- nvmf/common.sh@119 -- # set +e 00:19:58.871 04:23:11 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:58.871 04:23:11 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:58.871 rmmod nvme_tcp 00:19:59.131 rmmod nvme_fabrics 00:19:59.131 rmmod nvme_keyring 00:19:59.131 04:23:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:59.131 04:23:11 -- nvmf/common.sh@123 -- # set -e 00:19:59.131 04:23:11 -- nvmf/common.sh@124 -- # return 0 00:19:59.131 04:23:11 -- nvmf/common.sh@477 -- # '[' -n 85960 ']' 00:19:59.131 04:23:11 -- nvmf/common.sh@478 -- # killprocess 85960 00:19:59.131 04:23:11 -- common/autotest_common.sh@936 -- # '[' -z 85960 ']' 00:19:59.131 04:23:11 -- common/autotest_common.sh@940 -- # kill -0 85960 00:19:59.131 04:23:11 -- common/autotest_common.sh@941 -- # uname 00:19:59.131 04:23:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:59.131 04:23:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85960 00:19:59.131 killing process with pid 85960 00:19:59.131 04:23:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:59.131 04:23:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:59.131 04:23:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85960' 00:19:59.131 04:23:11 -- common/autotest_common.sh@955 -- # kill 85960 00:19:59.131 04:23:11 -- common/autotest_common.sh@960 -- # wait 85960 00:19:59.390 04:23:11 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:59.390 04:23:11 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:59.390 04:23:11 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:59.390 04:23:11 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:59.390 04:23:11 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:59.390 04:23:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:59.390 04:23:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:59.390 04:23:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:59.390 04:23:11 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:19:59.390 00:19:59.390 real 0m47.236s 00:19:59.390 user 2m18.993s 00:19:59.390 sys 0m5.553s 00:19:59.390 04:23:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:59.390 04:23:11 -- common/autotest_common.sh@10 -- # set +x 00:19:59.390 ************************************ 00:19:59.390 END TEST nvmf_timeout 00:19:59.390 ************************************ 00:19:59.390 04:23:11 -- nvmf/nvmf.sh@120 -- # [[ virt == phy ]] 00:19:59.390 04:23:11 -- nvmf/nvmf.sh@127 -- # timing_exit host 00:19:59.390 04:23:11 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:59.390 04:23:11 -- common/autotest_common.sh@10 -- # set +x 00:19:59.390 04:23:11 -- nvmf/nvmf.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:19:59.390 00:19:59.390 real 10m46.198s 00:19:59.390 user 30m7.783s 00:19:59.390 sys 3m19.316s 00:19:59.390 04:23:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:59.390 04:23:11 -- common/autotest_common.sh@10 -- # set +x 
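Note: the killprocess calls traced above follow the same defensive pattern: check that the PID is still alive, confirm by name that it is the expected process (and in particular not sudo), then kill it and wait for it to exit so logs and coverage data are flushed. A condensed sketch of that pattern, reconstructed from the traced steps rather than copied from autotest_common.sh:

    # Reconstructed from the traced steps above; illustrative, not the
    # literal autotest_common.sh implementation.
    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0   # already gone
        local name
        name=$(ps --no-headers -o comm= "$pid")
        [[ $name != sudo ]] || return 1          # never kill an elevated parent
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true
    }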
00:19:59.390 ************************************ 00:19:59.390 END TEST nvmf_tcp 00:19:59.390 ************************************ 00:19:59.390 04:23:11 -- spdk/autotest.sh@283 -- # [[ 1 -eq 0 ]] 00:19:59.390 04:23:11 -- spdk/autotest.sh@287 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:19:59.390 04:23:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:59.390 04:23:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:59.390 04:23:11 -- common/autotest_common.sh@10 -- # set +x 00:19:59.390 ************************************ 00:19:59.390 START TEST nvmf_dif 00:19:59.390 ************************************ 00:19:59.390 04:23:11 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:19:59.649 * Looking for test storage... 00:19:59.649 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:59.649 04:23:11 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:19:59.649 04:23:11 -- common/autotest_common.sh@1690 -- # lcov --version 00:19:59.649 04:23:11 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:19:59.649 04:23:12 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:19:59.649 04:23:12 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:19:59.649 04:23:12 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:19:59.649 04:23:12 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:19:59.649 04:23:12 -- scripts/common.sh@335 -- # IFS=.-: 00:19:59.649 04:23:12 -- scripts/common.sh@335 -- # read -ra ver1 00:19:59.649 04:23:12 -- scripts/common.sh@336 -- # IFS=.-: 00:19:59.650 04:23:12 -- scripts/common.sh@336 -- # read -ra ver2 00:19:59.650 04:23:12 -- scripts/common.sh@337 -- # local 'op=<' 00:19:59.650 04:23:12 -- scripts/common.sh@339 -- # ver1_l=2 00:19:59.650 04:23:12 -- scripts/common.sh@340 -- # ver2_l=1 00:19:59.650 04:23:12 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:19:59.650 04:23:12 -- scripts/common.sh@343 -- # case "$op" in 00:19:59.650 04:23:12 -- scripts/common.sh@344 -- # : 1 00:19:59.650 04:23:12 -- scripts/common.sh@363 -- # (( v = 0 )) 00:19:59.650 04:23:12 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:59.650 04:23:12 -- scripts/common.sh@364 -- # decimal 1 00:19:59.650 04:23:12 -- scripts/common.sh@352 -- # local d=1 00:19:59.650 04:23:12 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:59.650 04:23:12 -- scripts/common.sh@354 -- # echo 1 00:19:59.650 04:23:12 -- scripts/common.sh@364 -- # ver1[v]=1 00:19:59.650 04:23:12 -- scripts/common.sh@365 -- # decimal 2 00:19:59.650 04:23:12 -- scripts/common.sh@352 -- # local d=2 00:19:59.650 04:23:12 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:59.650 04:23:12 -- scripts/common.sh@354 -- # echo 2 00:19:59.650 04:23:12 -- scripts/common.sh@365 -- # ver2[v]=2 00:19:59.650 04:23:12 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:19:59.650 04:23:12 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:19:59.650 04:23:12 -- scripts/common.sh@367 -- # return 0 00:19:59.650 04:23:12 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:59.650 04:23:12 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:19:59.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:59.650 --rc genhtml_branch_coverage=1 00:19:59.650 --rc genhtml_function_coverage=1 00:19:59.650 --rc genhtml_legend=1 00:19:59.650 --rc geninfo_all_blocks=1 00:19:59.650 --rc geninfo_unexecuted_blocks=1 00:19:59.650 00:19:59.650 ' 00:19:59.650 04:23:12 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:19:59.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:59.650 --rc genhtml_branch_coverage=1 00:19:59.650 --rc genhtml_function_coverage=1 00:19:59.650 --rc genhtml_legend=1 00:19:59.650 --rc geninfo_all_blocks=1 00:19:59.650 --rc geninfo_unexecuted_blocks=1 00:19:59.650 00:19:59.650 ' 00:19:59.650 04:23:12 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:19:59.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:59.650 --rc genhtml_branch_coverage=1 00:19:59.650 --rc genhtml_function_coverage=1 00:19:59.650 --rc genhtml_legend=1 00:19:59.650 --rc geninfo_all_blocks=1 00:19:59.650 --rc geninfo_unexecuted_blocks=1 00:19:59.650 00:19:59.650 ' 00:19:59.650 04:23:12 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:19:59.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:59.650 --rc genhtml_branch_coverage=1 00:19:59.650 --rc genhtml_function_coverage=1 00:19:59.650 --rc genhtml_legend=1 00:19:59.650 --rc geninfo_all_blocks=1 00:19:59.650 --rc geninfo_unexecuted_blocks=1 00:19:59.650 00:19:59.650 ' 00:19:59.650 04:23:12 -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:59.650 04:23:12 -- nvmf/common.sh@7 -- # uname -s 00:19:59.650 04:23:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:59.650 04:23:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:59.650 04:23:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:59.650 04:23:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:59.650 04:23:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:59.650 04:23:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:59.650 04:23:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:59.650 04:23:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:59.650 04:23:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:59.650 04:23:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:59.650 04:23:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cb4d3929-adbe-4142-b5d1-990bbf2d4fca 00:19:59.650 
04:23:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=cb4d3929-adbe-4142-b5d1-990bbf2d4fca 00:19:59.650 04:23:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:59.650 04:23:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:59.650 04:23:12 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:59.650 04:23:12 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:59.650 04:23:12 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:59.650 04:23:12 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:59.650 04:23:12 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:59.650 04:23:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.650 04:23:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.650 04:23:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.650 04:23:12 -- paths/export.sh@5 -- # export PATH 00:19:59.650 04:23:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.650 04:23:12 -- nvmf/common.sh@46 -- # : 0 00:19:59.650 04:23:12 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:59.650 04:23:12 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:59.650 04:23:12 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:59.650 04:23:12 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:59.650 04:23:12 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:59.650 04:23:12 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:59.650 04:23:12 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:59.650 04:23:12 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:59.650 04:23:12 -- target/dif.sh@15 -- # NULL_META=16 00:19:59.650 04:23:12 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:19:59.650 04:23:12 -- target/dif.sh@15 -- # NULL_SIZE=64 00:19:59.650 04:23:12 -- target/dif.sh@15 -- # NULL_DIF=1 00:19:59.650 04:23:12 -- target/dif.sh@135 -- # nvmftestinit 00:19:59.650 04:23:12 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:59.650 04:23:12 
-- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:59.650 04:23:12 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:59.650 04:23:12 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:59.650 04:23:12 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:59.650 04:23:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:59.650 04:23:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:19:59.650 04:23:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:59.650 04:23:12 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:19:59.650 04:23:12 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:19:59.650 04:23:12 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:19:59.650 04:23:12 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:19:59.650 04:23:12 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:19:59.650 04:23:12 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:19:59.650 04:23:12 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:59.650 04:23:12 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:59.650 04:23:12 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:59.650 04:23:12 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:19:59.650 04:23:12 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:59.650 04:23:12 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:59.650 04:23:12 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:59.650 04:23:12 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:59.650 04:23:12 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:59.650 04:23:12 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:59.650 04:23:12 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:59.650 04:23:12 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:59.650 04:23:12 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:19:59.650 04:23:12 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:19:59.650 Cannot find device "nvmf_tgt_br" 00:19:59.650 04:23:12 -- nvmf/common.sh@154 -- # true 00:19:59.650 04:23:12 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:19:59.650 Cannot find device "nvmf_tgt_br2" 00:19:59.650 04:23:12 -- nvmf/common.sh@155 -- # true 00:19:59.650 04:23:12 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:19:59.650 04:23:12 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:19:59.650 Cannot find device "nvmf_tgt_br" 00:19:59.650 04:23:12 -- nvmf/common.sh@157 -- # true 00:19:59.650 04:23:12 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:19:59.650 Cannot find device "nvmf_tgt_br2" 00:19:59.650 04:23:12 -- nvmf/common.sh@158 -- # true 00:19:59.650 04:23:12 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:19:59.910 04:23:12 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:19:59.910 04:23:12 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:59.910 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:59.910 04:23:12 -- nvmf/common.sh@161 -- # true 00:19:59.910 04:23:12 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:59.910 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:59.910 04:23:12 -- nvmf/common.sh@162 -- # true 00:19:59.910 04:23:12 -- nvmf/common.sh@165 -- # ip netns add 
nvmf_tgt_ns_spdk 00:19:59.910 04:23:12 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:59.910 04:23:12 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:59.910 04:23:12 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:59.910 04:23:12 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:59.910 04:23:12 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:59.910 04:23:12 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:59.910 04:23:12 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:59.910 04:23:12 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:59.910 04:23:12 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:19:59.910 04:23:12 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:19:59.910 04:23:12 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:19:59.910 04:23:12 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:19:59.910 04:23:12 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:59.910 04:23:12 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:59.910 04:23:12 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:59.910 04:23:12 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:19:59.910 04:23:12 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:19:59.910 04:23:12 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:19:59.910 04:23:12 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:59.910 04:23:12 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:59.910 04:23:12 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:59.910 04:23:12 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:59.910 04:23:12 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:19:59.910 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:59.910 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.109 ms 00:19:59.910 00:19:59.910 --- 10.0.0.2 ping statistics --- 00:19:59.910 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:59.910 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:19:59.910 04:23:12 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:19:59.910 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:59.910 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:19:59.910 00:19:59.910 --- 10.0.0.3 ping statistics --- 00:19:59.910 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:59.910 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:19:59.910 04:23:12 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:59.910 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:59.910 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:19:59.910 00:19:59.910 --- 10.0.0.1 ping statistics --- 00:19:59.910 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:59.910 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:19:59.910 04:23:12 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:59.910 04:23:12 -- nvmf/common.sh@421 -- # return 0 00:19:59.910 04:23:12 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:19:59.910 04:23:12 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:00.478 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:00.478 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:00.478 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:00.478 04:23:12 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:00.478 04:23:12 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:00.478 04:23:12 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:00.478 04:23:12 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:00.478 04:23:12 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:00.478 04:23:12 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:00.478 04:23:12 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:20:00.478 04:23:12 -- target/dif.sh@137 -- # nvmfappstart 00:20:00.478 04:23:12 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:00.478 04:23:12 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:00.478 04:23:12 -- common/autotest_common.sh@10 -- # set +x 00:20:00.478 04:23:12 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:00.478 04:23:12 -- nvmf/common.sh@469 -- # nvmfpid=86912 00:20:00.478 04:23:12 -- nvmf/common.sh@470 -- # waitforlisten 86912 00:20:00.478 04:23:12 -- common/autotest_common.sh@829 -- # '[' -z 86912 ']' 00:20:00.478 04:23:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:00.478 04:23:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:00.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:00.478 04:23:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:00.478 04:23:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:00.478 04:23:12 -- common/autotest_common.sh@10 -- # set +x 00:20:00.478 [2024-12-06 04:23:12.892114] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:00.478 [2024-12-06 04:23:12.892209] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:00.478 [2024-12-06 04:23:13.034884] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:00.736 [2024-12-06 04:23:13.112209] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:00.736 [2024-12-06 04:23:13.112415] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:00.736 [2024-12-06 04:23:13.112433] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:00.736 [2024-12-06 04:23:13.112445] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:00.736 [2024-12-06 04:23:13.112483] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:01.672 04:23:13 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:01.672 04:23:13 -- common/autotest_common.sh@862 -- # return 0 00:20:01.672 04:23:13 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:01.672 04:23:13 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:01.672 04:23:13 -- common/autotest_common.sh@10 -- # set +x 00:20:01.672 04:23:13 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:01.672 04:23:13 -- target/dif.sh@139 -- # create_transport 00:20:01.672 04:23:13 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:20:01.672 04:23:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.672 04:23:13 -- common/autotest_common.sh@10 -- # set +x 00:20:01.672 [2024-12-06 04:23:13.967829] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:01.672 04:23:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.672 04:23:13 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:20:01.672 04:23:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:20:01.672 04:23:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:01.672 04:23:13 -- common/autotest_common.sh@10 -- # set +x 00:20:01.672 ************************************ 00:20:01.672 START TEST fio_dif_1_default 00:20:01.672 ************************************ 00:20:01.672 04:23:13 -- common/autotest_common.sh@1114 -- # fio_dif_1 00:20:01.672 04:23:13 -- target/dif.sh@86 -- # create_subsystems 0 00:20:01.672 04:23:13 -- target/dif.sh@28 -- # local sub 00:20:01.672 04:23:13 -- target/dif.sh@30 -- # for sub in "$@" 00:20:01.672 04:23:13 -- target/dif.sh@31 -- # create_subsystem 0 00:20:01.672 04:23:13 -- target/dif.sh@18 -- # local sub_id=0 00:20:01.672 04:23:13 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:20:01.672 04:23:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.672 04:23:13 -- common/autotest_common.sh@10 -- # set +x 00:20:01.672 bdev_null0 00:20:01.672 04:23:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.672 04:23:13 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:01.672 04:23:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.672 04:23:13 -- common/autotest_common.sh@10 -- # set +x 00:20:01.672 04:23:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.672 04:23:14 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:01.672 04:23:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.672 04:23:14 -- common/autotest_common.sh@10 -- # set +x 00:20:01.672 04:23:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.672 04:23:14 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:01.672 04:23:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.672 04:23:14 -- common/autotest_common.sh@10 -- # set +x 00:20:01.672 [2024-12-06 04:23:14.015954] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:01.672 04:23:14 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.672 04:23:14 -- target/dif.sh@87 -- # fio /dev/fd/62 00:20:01.672 04:23:14 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:20:01.672 04:23:14 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:20:01.672 04:23:14 -- nvmf/common.sh@520 -- # config=() 00:20:01.672 04:23:14 -- nvmf/common.sh@520 -- # local subsystem config 00:20:01.672 04:23:14 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:20:01.672 04:23:14 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:01.672 04:23:14 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:20:01.672 { 00:20:01.672 "params": { 00:20:01.672 "name": "Nvme$subsystem", 00:20:01.672 "trtype": "$TEST_TRANSPORT", 00:20:01.672 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:01.672 "adrfam": "ipv4", 00:20:01.672 "trsvcid": "$NVMF_PORT", 00:20:01.672 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:01.672 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:01.672 "hdgst": ${hdgst:-false}, 00:20:01.672 "ddgst": ${ddgst:-false} 00:20:01.672 }, 00:20:01.672 "method": "bdev_nvme_attach_controller" 00:20:01.672 } 00:20:01.672 EOF 00:20:01.672 )") 00:20:01.672 04:23:14 -- target/dif.sh@82 -- # gen_fio_conf 00:20:01.672 04:23:14 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:01.672 04:23:14 -- target/dif.sh@54 -- # local file 00:20:01.672 04:23:14 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:20:01.672 04:23:14 -- target/dif.sh@56 -- # cat 00:20:01.672 04:23:14 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:01.672 04:23:14 -- common/autotest_common.sh@1328 -- # local sanitizers 00:20:01.672 04:23:14 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:01.672 04:23:14 -- common/autotest_common.sh@1330 -- # shift 00:20:01.672 04:23:14 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:20:01.672 04:23:14 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:20:01.672 04:23:14 -- nvmf/common.sh@542 -- # cat 00:20:01.672 04:23:14 -- target/dif.sh@72 -- # (( file = 1 )) 00:20:01.672 04:23:14 -- target/dif.sh@72 -- # (( file <= files )) 00:20:01.672 04:23:14 -- common/autotest_common.sh@1334 -- # grep libasan 00:20:01.672 04:23:14 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:01.672 04:23:14 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:20:01.672 04:23:14 -- nvmf/common.sh@544 -- # jq . 
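For context, the harness does not use the kernel NVMe/TCP initiator here; it drives I/O through SPDK's fio bdev plugin. The JSON printed in the next entries is the bdev_nvme_attach_controller configuration assembled by gen_nvmf_target_json, and fio receives it together with the job file over /dev/fd/62 and /dev/fd/61. A minimal standalone equivalent, assuming the plugin and fio paths shown in this log and substituting an illustrative on-disk config path and explicit job parameters for the test's file descriptors, would be:

# Sketch only: /tmp/nvme0.json would hold the same bdev_nvme_attach_controller params printed below.
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --name=filename0 --thread=1 \
  --ioengine=spdk_bdev --spdk_json_conf /tmp/nvme0.json \
  --filename=Nvme0n1 --rw=randread --bs=4096 --iodepth=4 \
  --runtime=10 --time_based

The bdev name Nvme0n1 follows from the controller name "Nvme0" plus namespace 1, and the block size, queue depth, and runtime simply mirror the job summary reported further down in this log.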
00:20:01.672 04:23:14 -- nvmf/common.sh@545 -- # IFS=, 00:20:01.672 04:23:14 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:20:01.672 "params": { 00:20:01.672 "name": "Nvme0", 00:20:01.673 "trtype": "tcp", 00:20:01.673 "traddr": "10.0.0.2", 00:20:01.673 "adrfam": "ipv4", 00:20:01.673 "trsvcid": "4420", 00:20:01.673 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:01.673 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:01.673 "hdgst": false, 00:20:01.673 "ddgst": false 00:20:01.673 }, 00:20:01.673 "method": "bdev_nvme_attach_controller" 00:20:01.673 }' 00:20:01.673 04:23:14 -- common/autotest_common.sh@1334 -- # asan_lib= 00:20:01.673 04:23:14 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:20:01.673 04:23:14 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:20:01.673 04:23:14 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:01.673 04:23:14 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:20:01.673 04:23:14 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:20:01.673 04:23:14 -- common/autotest_common.sh@1334 -- # asan_lib= 00:20:01.673 04:23:14 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:20:01.673 04:23:14 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:01.673 04:23:14 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:01.932 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:20:01.932 fio-3.35 00:20:01.932 Starting 1 thread 00:20:02.190 [2024-12-06 04:23:14.610205] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:20:02.190 [2024-12-06 04:23:14.610883] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:20:12.188 00:20:12.188 filename0: (groupid=0, jobs=1): err= 0: pid=86984: Fri Dec 6 04:23:24 2024 00:20:12.188 read: IOPS=9455, BW=36.9MiB/s (38.7MB/s)(369MiB/10001msec) 00:20:12.188 slat (nsec): min=5716, max=66947, avg=7927.06, stdev=3541.79 00:20:12.188 clat (usec): min=307, max=2818, avg=399.81, stdev=50.13 00:20:12.188 lat (usec): min=313, max=2837, avg=407.73, stdev=50.93 00:20:12.188 clat percentiles (usec): 00:20:12.188 | 1.00th=[ 322], 5.00th=[ 338], 10.00th=[ 347], 20.00th=[ 363], 00:20:12.188 | 30.00th=[ 371], 40.00th=[ 379], 50.00th=[ 392], 60.00th=[ 404], 00:20:12.188 | 70.00th=[ 420], 80.00th=[ 437], 90.00th=[ 465], 95.00th=[ 486], 00:20:12.188 | 99.00th=[ 523], 99.50th=[ 545], 99.90th=[ 611], 99.95th=[ 644], 00:20:12.188 | 99.99th=[ 1565] 00:20:12.188 bw ( KiB/s): min=35808, max=41024, per=100.00%, avg=37886.32, stdev=1182.68, samples=19 00:20:12.188 iops : min= 8952, max=10256, avg=9471.58, stdev=295.67, samples=19 00:20:12.188 lat (usec) : 500=97.38%, 750=2.60%, 1000=0.01% 00:20:12.188 lat (msec) : 2=0.01%, 4=0.01% 00:20:12.188 cpu : usr=85.56%, sys=12.70%, ctx=32, majf=0, minf=0 00:20:12.188 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:12.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:12.188 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:12.188 issued rwts: total=94560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:12.188 latency : target=0, window=0, percentile=100.00%, depth=4 00:20:12.188 00:20:12.188 Run status group 0 (all jobs): 00:20:12.188 READ: bw=36.9MiB/s (38.7MB/s), 36.9MiB/s-36.9MiB/s (38.7MB/s-38.7MB/s), io=369MiB (387MB), run=10001-10001msec 00:20:12.447 04:23:24 -- target/dif.sh@88 -- # destroy_subsystems 0 00:20:12.447 04:23:24 -- target/dif.sh@43 -- # local sub 00:20:12.447 04:23:24 -- target/dif.sh@45 -- # for sub in "$@" 00:20:12.447 04:23:24 -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:12.447 04:23:24 -- target/dif.sh@36 -- # local sub_id=0 00:20:12.447 04:23:24 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:12.447 04:23:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.447 04:23:24 -- common/autotest_common.sh@10 -- # set +x 00:20:12.447 04:23:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.447 04:23:24 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:12.447 04:23:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.447 04:23:24 -- common/autotest_common.sh@10 -- # set +x 00:20:12.447 04:23:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.447 00:20:12.447 real 0m10.966s 00:20:12.447 user 0m9.147s 00:20:12.447 sys 0m1.556s 00:20:12.447 04:23:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:12.447 ************************************ 00:20:12.447 END TEST fio_dif_1_default 00:20:12.447 04:23:24 -- common/autotest_common.sh@10 -- # set +x 00:20:12.447 ************************************ 00:20:12.447 04:23:24 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:20:12.447 04:23:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:20:12.447 04:23:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:12.447 04:23:24 -- common/autotest_common.sh@10 -- # set +x 00:20:12.447 ************************************ 00:20:12.447 START TEST 
fio_dif_1_multi_subsystems 00:20:12.447 ************************************ 00:20:12.447 04:23:25 -- common/autotest_common.sh@1114 -- # fio_dif_1_multi_subsystems 00:20:12.447 04:23:25 -- target/dif.sh@92 -- # local files=1 00:20:12.447 04:23:25 -- target/dif.sh@94 -- # create_subsystems 0 1 00:20:12.447 04:23:25 -- target/dif.sh@28 -- # local sub 00:20:12.447 04:23:25 -- target/dif.sh@30 -- # for sub in "$@" 00:20:12.447 04:23:25 -- target/dif.sh@31 -- # create_subsystem 0 00:20:12.447 04:23:25 -- target/dif.sh@18 -- # local sub_id=0 00:20:12.447 04:23:25 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:20:12.447 04:23:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.447 04:23:25 -- common/autotest_common.sh@10 -- # set +x 00:20:12.710 bdev_null0 00:20:12.710 04:23:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.710 04:23:25 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:12.710 04:23:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.710 04:23:25 -- common/autotest_common.sh@10 -- # set +x 00:20:12.710 04:23:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.710 04:23:25 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:12.710 04:23:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.710 04:23:25 -- common/autotest_common.sh@10 -- # set +x 00:20:12.710 04:23:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.710 04:23:25 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:12.710 04:23:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.710 04:23:25 -- common/autotest_common.sh@10 -- # set +x 00:20:12.710 [2024-12-06 04:23:25.033235] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:12.710 04:23:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.710 04:23:25 -- target/dif.sh@30 -- # for sub in "$@" 00:20:12.710 04:23:25 -- target/dif.sh@31 -- # create_subsystem 1 00:20:12.710 04:23:25 -- target/dif.sh@18 -- # local sub_id=1 00:20:12.710 04:23:25 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:20:12.710 04:23:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.710 04:23:25 -- common/autotest_common.sh@10 -- # set +x 00:20:12.710 bdev_null1 00:20:12.710 04:23:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.710 04:23:25 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:20:12.710 04:23:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.710 04:23:25 -- common/autotest_common.sh@10 -- # set +x 00:20:12.710 04:23:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.710 04:23:25 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:20:12.710 04:23:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.710 04:23:25 -- common/autotest_common.sh@10 -- # set +x 00:20:12.710 04:23:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.710 04:23:25 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:12.710 04:23:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.710 04:23:25 -- 
common/autotest_common.sh@10 -- # set +x 00:20:12.710 04:23:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.711 04:23:25 -- target/dif.sh@95 -- # fio /dev/fd/62 00:20:12.711 04:23:25 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:20:12.711 04:23:25 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:20:12.711 04:23:25 -- nvmf/common.sh@520 -- # config=() 00:20:12.711 04:23:25 -- nvmf/common.sh@520 -- # local subsystem config 00:20:12.711 04:23:25 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:20:12.711 04:23:25 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:12.711 04:23:25 -- target/dif.sh@82 -- # gen_fio_conf 00:20:12.711 04:23:25 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:20:12.711 { 00:20:12.711 "params": { 00:20:12.711 "name": "Nvme$subsystem", 00:20:12.711 "trtype": "$TEST_TRANSPORT", 00:20:12.711 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:12.711 "adrfam": "ipv4", 00:20:12.711 "trsvcid": "$NVMF_PORT", 00:20:12.711 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:12.711 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:12.711 "hdgst": ${hdgst:-false}, 00:20:12.711 "ddgst": ${ddgst:-false} 00:20:12.711 }, 00:20:12.711 "method": "bdev_nvme_attach_controller" 00:20:12.711 } 00:20:12.711 EOF 00:20:12.711 )") 00:20:12.711 04:23:25 -- target/dif.sh@54 -- # local file 00:20:12.711 04:23:25 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:12.711 04:23:25 -- target/dif.sh@56 -- # cat 00:20:12.711 04:23:25 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:20:12.711 04:23:25 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:12.711 04:23:25 -- common/autotest_common.sh@1328 -- # local sanitizers 00:20:12.711 04:23:25 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:12.711 04:23:25 -- common/autotest_common.sh@1330 -- # shift 00:20:12.711 04:23:25 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:20:12.711 04:23:25 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:20:12.711 04:23:25 -- nvmf/common.sh@542 -- # cat 00:20:12.711 04:23:25 -- target/dif.sh@72 -- # (( file = 1 )) 00:20:12.711 04:23:25 -- target/dif.sh@72 -- # (( file <= files )) 00:20:12.711 04:23:25 -- target/dif.sh@73 -- # cat 00:20:12.711 04:23:25 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:12.711 04:23:25 -- common/autotest_common.sh@1334 -- # grep libasan 00:20:12.711 04:23:25 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:20:12.711 04:23:25 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:20:12.711 04:23:25 -- target/dif.sh@72 -- # (( file++ )) 00:20:12.711 04:23:25 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:20:12.711 { 00:20:12.711 "params": { 00:20:12.711 "name": "Nvme$subsystem", 00:20:12.711 "trtype": "$TEST_TRANSPORT", 00:20:12.711 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:12.711 "adrfam": "ipv4", 00:20:12.711 "trsvcid": "$NVMF_PORT", 00:20:12.711 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:12.711 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:12.711 "hdgst": ${hdgst:-false}, 00:20:12.711 "ddgst": ${ddgst:-false} 00:20:12.711 }, 00:20:12.711 "method": "bdev_nvme_attach_controller" 00:20:12.711 } 00:20:12.711 EOF 00:20:12.711 )") 00:20:12.711 04:23:25 -- 
target/dif.sh@72 -- # (( file <= files )) 00:20:12.711 04:23:25 -- nvmf/common.sh@542 -- # cat 00:20:12.711 04:23:25 -- nvmf/common.sh@544 -- # jq . 00:20:12.711 04:23:25 -- nvmf/common.sh@545 -- # IFS=, 00:20:12.711 04:23:25 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:20:12.711 "params": { 00:20:12.711 "name": "Nvme0", 00:20:12.711 "trtype": "tcp", 00:20:12.711 "traddr": "10.0.0.2", 00:20:12.711 "adrfam": "ipv4", 00:20:12.711 "trsvcid": "4420", 00:20:12.711 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:12.711 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:12.711 "hdgst": false, 00:20:12.711 "ddgst": false 00:20:12.711 }, 00:20:12.711 "method": "bdev_nvme_attach_controller" 00:20:12.711 },{ 00:20:12.711 "params": { 00:20:12.711 "name": "Nvme1", 00:20:12.711 "trtype": "tcp", 00:20:12.711 "traddr": "10.0.0.2", 00:20:12.711 "adrfam": "ipv4", 00:20:12.711 "trsvcid": "4420", 00:20:12.711 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:12.711 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:12.711 "hdgst": false, 00:20:12.711 "ddgst": false 00:20:12.711 }, 00:20:12.711 "method": "bdev_nvme_attach_controller" 00:20:12.711 }' 00:20:12.711 04:23:25 -- common/autotest_common.sh@1334 -- # asan_lib= 00:20:12.711 04:23:25 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:20:12.711 04:23:25 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:20:12.711 04:23:25 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:20:12.711 04:23:25 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:12.711 04:23:25 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:20:12.711 04:23:25 -- common/autotest_common.sh@1334 -- # asan_lib= 00:20:12.711 04:23:25 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:20:12.711 04:23:25 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:12.711 04:23:25 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:12.969 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:20:12.969 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:20:12.969 fio-3.35 00:20:12.969 Starting 2 threads 00:20:13.226 [2024-12-06 04:23:25.711512] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:20:13.226 [2024-12-06 04:23:25.711594] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:20:25.417 00:20:25.417 filename0: (groupid=0, jobs=1): err= 0: pid=87145: Fri Dec 6 04:23:35 2024 00:20:25.417 read: IOPS=5090, BW=19.9MiB/s (20.9MB/s)(199MiB/10001msec) 00:20:25.417 slat (nsec): min=5882, max=70910, avg=13126.93, stdev=5025.85 00:20:25.417 clat (usec): min=561, max=3057, avg=750.42, stdev=79.41 00:20:25.417 lat (usec): min=568, max=3067, avg=763.54, stdev=80.18 00:20:25.417 clat percentiles (usec): 00:20:25.417 | 1.00th=[ 619], 5.00th=[ 652], 10.00th=[ 668], 20.00th=[ 693], 00:20:25.417 | 30.00th=[ 709], 40.00th=[ 725], 50.00th=[ 742], 60.00th=[ 758], 00:20:25.417 | 70.00th=[ 783], 80.00th=[ 807], 90.00th=[ 840], 95.00th=[ 873], 00:20:25.417 | 99.00th=[ 955], 99.50th=[ 1004], 99.90th=[ 1450], 99.95th=[ 1565], 00:20:25.417 | 99.99th=[ 2540] 00:20:25.417 bw ( KiB/s): min=19289, max=22514, per=50.05%, avg=20382.89, stdev=772.15, samples=19 00:20:25.417 iops : min= 4822, max= 5628, avg=5095.68, stdev=192.98, samples=19 00:20:25.417 lat (usec) : 750=53.97%, 1000=45.54% 00:20:25.417 lat (msec) : 2=0.48%, 4=0.02% 00:20:25.417 cpu : usr=89.23%, sys=9.40%, ctx=7, majf=0, minf=0 00:20:25.417 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:25.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:25.417 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:25.417 issued rwts: total=50911,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:25.417 latency : target=0, window=0, percentile=100.00%, depth=4 00:20:25.417 filename1: (groupid=0, jobs=1): err= 0: pid=87146: Fri Dec 6 04:23:35 2024 00:20:25.417 read: IOPS=5090, BW=19.9MiB/s (20.9MB/s)(199MiB/10001msec) 00:20:25.417 slat (nsec): min=5766, max=71915, avg=13351.50, stdev=5065.41 00:20:25.417 clat (usec): min=588, max=3054, avg=748.65, stdev=76.97 00:20:25.417 lat (usec): min=594, max=3067, avg=762.00, stdev=77.58 00:20:25.417 clat percentiles (usec): 00:20:25.417 | 1.00th=[ 627], 5.00th=[ 652], 10.00th=[ 668], 20.00th=[ 693], 00:20:25.417 | 30.00th=[ 709], 40.00th=[ 725], 50.00th=[ 742], 60.00th=[ 758], 00:20:25.417 | 70.00th=[ 783], 80.00th=[ 799], 90.00th=[ 832], 95.00th=[ 865], 00:20:25.417 | 99.00th=[ 947], 99.50th=[ 988], 99.90th=[ 1450], 99.95th=[ 1549], 00:20:25.417 | 99.99th=[ 2540] 00:20:25.417 bw ( KiB/s): min=19289, max=22514, per=50.05%, avg=20382.89, stdev=772.15, samples=19 00:20:25.417 iops : min= 4822, max= 5628, avg=5095.68, stdev=192.98, samples=19 00:20:25.417 lat (usec) : 750=55.46%, 1000=44.10% 00:20:25.417 lat (msec) : 2=0.42%, 4=0.02% 00:20:25.417 cpu : usr=90.06%, sys=8.61%, ctx=8, majf=0, minf=0 00:20:25.417 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:25.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:25.417 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:25.417 issued rwts: total=50912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:25.417 latency : target=0, window=0, percentile=100.00%, depth=4 00:20:25.417 00:20:25.417 Run status group 0 (all jobs): 00:20:25.417 READ: bw=39.8MiB/s (41.7MB/s), 19.9MiB/s-19.9MiB/s (20.9MB/s-20.9MB/s), io=398MiB (417MB), run=10001-10001msec 00:20:25.417 04:23:36 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:20:25.417 04:23:36 -- target/dif.sh@43 -- # local sub 00:20:25.417 04:23:36 -- target/dif.sh@45 -- # for sub in "$@" 00:20:25.417 04:23:36 -- target/dif.sh@46 
-- # destroy_subsystem 0 00:20:25.417 04:23:36 -- target/dif.sh@36 -- # local sub_id=0 00:20:25.417 04:23:36 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:25.417 04:23:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.417 04:23:36 -- common/autotest_common.sh@10 -- # set +x 00:20:25.417 04:23:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.417 04:23:36 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:25.417 04:23:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.417 04:23:36 -- common/autotest_common.sh@10 -- # set +x 00:20:25.417 04:23:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.417 04:23:36 -- target/dif.sh@45 -- # for sub in "$@" 00:20:25.417 04:23:36 -- target/dif.sh@46 -- # destroy_subsystem 1 00:20:25.417 04:23:36 -- target/dif.sh@36 -- # local sub_id=1 00:20:25.417 04:23:36 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:25.417 04:23:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.418 04:23:36 -- common/autotest_common.sh@10 -- # set +x 00:20:25.418 04:23:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.418 04:23:36 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:20:25.418 04:23:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.418 04:23:36 -- common/autotest_common.sh@10 -- # set +x 00:20:25.418 04:23:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.418 00:20:25.418 real 0m11.061s 00:20:25.418 user 0m18.645s 00:20:25.418 sys 0m2.105s 00:20:25.418 04:23:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:25.418 ************************************ 00:20:25.418 END TEST fio_dif_1_multi_subsystems 00:20:25.418 ************************************ 00:20:25.418 04:23:36 -- common/autotest_common.sh@10 -- # set +x 00:20:25.418 04:23:36 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:20:25.418 04:23:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:20:25.418 04:23:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:25.418 04:23:36 -- common/autotest_common.sh@10 -- # set +x 00:20:25.418 ************************************ 00:20:25.418 START TEST fio_dif_rand_params 00:20:25.418 ************************************ 00:20:25.418 04:23:36 -- common/autotest_common.sh@1114 -- # fio_dif_rand_params 00:20:25.418 04:23:36 -- target/dif.sh@100 -- # local NULL_DIF 00:20:25.418 04:23:36 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:20:25.418 04:23:36 -- target/dif.sh@103 -- # NULL_DIF=3 00:20:25.418 04:23:36 -- target/dif.sh@103 -- # bs=128k 00:20:25.418 04:23:36 -- target/dif.sh@103 -- # numjobs=3 00:20:25.418 04:23:36 -- target/dif.sh@103 -- # iodepth=3 00:20:25.418 04:23:36 -- target/dif.sh@103 -- # runtime=5 00:20:25.418 04:23:36 -- target/dif.sh@105 -- # create_subsystems 0 00:20:25.418 04:23:36 -- target/dif.sh@28 -- # local sub 00:20:25.418 04:23:36 -- target/dif.sh@30 -- # for sub in "$@" 00:20:25.418 04:23:36 -- target/dif.sh@31 -- # create_subsystem 0 00:20:25.418 04:23:36 -- target/dif.sh@18 -- # local sub_id=0 00:20:25.418 04:23:36 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:20:25.418 04:23:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.418 04:23:36 -- common/autotest_common.sh@10 -- # set +x 00:20:25.418 bdev_null0 00:20:25.418 04:23:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
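The create_subsystem helper seen above is a thin wrapper around a few target RPCs. Reconstructed from the rpc_cmd lines in this log (the remaining calls appear in the next entries), the same DIF-type-3 setup issued by hand with the repo's rpc.py would look roughly like the sketch below; in the actual run the calls go through the rpc_cmd wrapper inside the nvmf_tgt_ns_spdk namespace rather than being typed out this way:

/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The 64 and 512 arguments are the null bdev's size in MiB and block size, and --md-size 16 --dif-type 3 enable end-to-end protection metadata, matching the NULL_META, NULL_BLOCK_SIZE, NULL_SIZE, and NULL_DIF values set near the top of dif.sh earlier in this log.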
00:20:25.418 04:23:36 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:25.418 04:23:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.418 04:23:36 -- common/autotest_common.sh@10 -- # set +x 00:20:25.418 04:23:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.418 04:23:36 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:25.418 04:23:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.418 04:23:36 -- common/autotest_common.sh@10 -- # set +x 00:20:25.418 04:23:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.418 04:23:36 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:25.418 04:23:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.418 04:23:36 -- common/autotest_common.sh@10 -- # set +x 00:20:25.418 [2024-12-06 04:23:36.147232] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:25.418 04:23:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.418 04:23:36 -- target/dif.sh@106 -- # fio /dev/fd/62 00:20:25.418 04:23:36 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:20:25.418 04:23:36 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:20:25.418 04:23:36 -- nvmf/common.sh@520 -- # config=() 00:20:25.418 04:23:36 -- nvmf/common.sh@520 -- # local subsystem config 00:20:25.418 04:23:36 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:20:25.418 04:23:36 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:20:25.418 { 00:20:25.418 "params": { 00:20:25.418 "name": "Nvme$subsystem", 00:20:25.418 "trtype": "$TEST_TRANSPORT", 00:20:25.418 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:25.418 "adrfam": "ipv4", 00:20:25.418 "trsvcid": "$NVMF_PORT", 00:20:25.418 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:25.418 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:25.418 "hdgst": ${hdgst:-false}, 00:20:25.418 "ddgst": ${ddgst:-false} 00:20:25.418 }, 00:20:25.418 "method": "bdev_nvme_attach_controller" 00:20:25.418 } 00:20:25.418 EOF 00:20:25.418 )") 00:20:25.418 04:23:36 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:25.418 04:23:36 -- target/dif.sh@82 -- # gen_fio_conf 00:20:25.418 04:23:36 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:25.418 04:23:36 -- target/dif.sh@54 -- # local file 00:20:25.418 04:23:36 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:20:25.418 04:23:36 -- target/dif.sh@56 -- # cat 00:20:25.418 04:23:36 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:25.418 04:23:36 -- nvmf/common.sh@542 -- # cat 00:20:25.418 04:23:36 -- common/autotest_common.sh@1328 -- # local sanitizers 00:20:25.418 04:23:36 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:25.418 04:23:36 -- common/autotest_common.sh@1330 -- # shift 00:20:25.418 04:23:36 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:20:25.418 04:23:36 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:20:25.418 04:23:36 -- target/dif.sh@72 -- # (( file = 1 )) 00:20:25.418 04:23:36 -- target/dif.sh@72 -- # (( file <= files )) 00:20:25.418 04:23:36 -- common/autotest_common.sh@1334 
-- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:25.418 04:23:36 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:20:25.418 04:23:36 -- nvmf/common.sh@544 -- # jq . 00:20:25.418 04:23:36 -- common/autotest_common.sh@1334 -- # grep libasan 00:20:25.418 04:23:36 -- nvmf/common.sh@545 -- # IFS=, 00:20:25.418 04:23:36 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:20:25.418 "params": { 00:20:25.418 "name": "Nvme0", 00:20:25.418 "trtype": "tcp", 00:20:25.418 "traddr": "10.0.0.2", 00:20:25.418 "adrfam": "ipv4", 00:20:25.418 "trsvcid": "4420", 00:20:25.418 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:25.418 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:25.418 "hdgst": false, 00:20:25.418 "ddgst": false 00:20:25.418 }, 00:20:25.418 "method": "bdev_nvme_attach_controller" 00:20:25.418 }' 00:20:25.418 04:23:36 -- common/autotest_common.sh@1334 -- # asan_lib= 00:20:25.418 04:23:36 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:20:25.418 04:23:36 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:20:25.418 04:23:36 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:25.418 04:23:36 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:20:25.418 04:23:36 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:20:25.418 04:23:36 -- common/autotest_common.sh@1334 -- # asan_lib= 00:20:25.418 04:23:36 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:20:25.418 04:23:36 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:25.418 04:23:36 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:25.418 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:20:25.418 ... 00:20:25.418 fio-3.35 00:20:25.418 Starting 3 threads 00:20:25.418 [2024-12-06 04:23:36.727031] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:20:25.418 [2024-12-06 04:23:36.727104] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:20:29.612 00:20:29.612 filename0: (groupid=0, jobs=1): err= 0: pid=87302: Fri Dec 6 04:23:41 2024 00:20:29.612 read: IOPS=273, BW=34.2MiB/s (35.9MB/s)(171MiB/5005msec) 00:20:29.612 slat (nsec): min=5685, max=53847, avg=10409.57, stdev=4725.91 00:20:29.612 clat (usec): min=6298, max=15446, avg=10924.85, stdev=521.06 00:20:29.612 lat (usec): min=6305, max=15461, avg=10935.26, stdev=521.39 00:20:29.612 clat percentiles (usec): 00:20:29.612 | 1.00th=[10028], 5.00th=[10159], 10.00th=[10421], 20.00th=[10552], 00:20:29.612 | 30.00th=[10683], 40.00th=[10814], 50.00th=[10945], 60.00th=[11076], 00:20:29.612 | 70.00th=[11207], 80.00th=[11338], 90.00th=[11469], 95.00th=[11600], 00:20:29.612 | 99.00th=[11994], 99.50th=[11994], 99.90th=[15401], 99.95th=[15401], 00:20:29.612 | 99.99th=[15401] 00:20:29.612 bw ( KiB/s): min=34560, max=36096, per=33.36%, avg=35072.00, stdev=665.11, samples=9 00:20:29.612 iops : min= 270, max= 282, avg=274.00, stdev= 5.20, samples=9 00:20:29.612 lat (msec) : 10=1.24%, 20=98.76% 00:20:29.612 cpu : usr=90.97%, sys=7.83%, ctx=99, majf=0, minf=0 00:20:29.612 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:29.612 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:29.612 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:29.612 issued rwts: total=1371,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:29.612 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:29.612 filename0: (groupid=0, jobs=1): err= 0: pid=87303: Fri Dec 6 04:23:41 2024 00:20:29.612 read: IOPS=273, BW=34.2MiB/s (35.9MB/s)(171MiB/5004msec) 00:20:29.612 slat (nsec): min=6678, max=44406, avg=10036.56, stdev=4295.78 00:20:29.612 clat (usec): min=9431, max=12115, avg=10923.24, stdev=427.59 00:20:29.612 lat (usec): min=9438, max=12129, avg=10933.28, stdev=427.69 00:20:29.612 clat percentiles (usec): 00:20:29.612 | 1.00th=[10028], 5.00th=[10159], 10.00th=[10421], 20.00th=[10552], 00:20:29.612 | 30.00th=[10683], 40.00th=[10814], 50.00th=[10945], 60.00th=[11076], 00:20:29.612 | 70.00th=[11207], 80.00th=[11338], 90.00th=[11469], 95.00th=[11600], 00:20:29.612 | 99.00th=[11863], 99.50th=[11863], 99.90th=[12125], 99.95th=[12125], 00:20:29.612 | 99.99th=[12125] 00:20:29.612 bw ( KiB/s): min=34560, max=36096, per=33.36%, avg=35072.00, stdev=543.06, samples=9 00:20:29.612 iops : min= 270, max= 282, avg=274.00, stdev= 4.24, samples=9 00:20:29.612 lat (msec) : 10=0.88%, 20=99.12% 00:20:29.612 cpu : usr=91.68%, sys=7.74%, ctx=43, majf=0, minf=0 00:20:29.612 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:29.612 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:29.612 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:29.612 issued rwts: total=1371,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:29.612 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:29.612 filename0: (groupid=0, jobs=1): err= 0: pid=87304: Fri Dec 6 04:23:41 2024 00:20:29.612 read: IOPS=273, BW=34.2MiB/s (35.9MB/s)(171MiB/5008msec) 00:20:29.612 slat (nsec): min=6638, max=41669, avg=9645.57, stdev=3845.94 00:20:29.612 clat (usec): min=9598, max=14513, avg=10932.99, stdev=457.42 00:20:29.612 lat (usec): min=9605, max=14553, avg=10942.64, stdev=458.01 00:20:29.612 clat percentiles (usec): 00:20:29.612 | 1.00th=[10028], 5.00th=[10159], 
10.00th=[10421], 20.00th=[10552], 00:20:29.612 | 30.00th=[10683], 40.00th=[10814], 50.00th=[10945], 60.00th=[11076], 00:20:29.612 | 70.00th=[11207], 80.00th=[11338], 90.00th=[11469], 95.00th=[11469], 00:20:29.612 | 99.00th=[11994], 99.50th=[12125], 99.90th=[14484], 99.95th=[14484], 00:20:29.612 | 99.99th=[14484] 00:20:29.612 bw ( KiB/s): min=34491, max=36096, per=33.31%, avg=35013.90, stdev=653.43, samples=10 00:20:29.612 iops : min= 269, max= 282, avg=273.50, stdev= 5.15, samples=10 00:20:29.612 lat (msec) : 10=1.09%, 20=98.91% 00:20:29.612 cpu : usr=91.51%, sys=7.97%, ctx=6, majf=0, minf=0 00:20:29.612 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:29.612 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:29.612 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:29.612 issued rwts: total=1371,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:29.612 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:29.612 00:20:29.612 Run status group 0 (all jobs): 00:20:29.612 READ: bw=103MiB/s (108MB/s), 34.2MiB/s-34.2MiB/s (35.9MB/s-35.9MB/s), io=514MiB (539MB), run=5004-5008msec 00:20:29.612 04:23:42 -- target/dif.sh@107 -- # destroy_subsystems 0 00:20:29.612 04:23:42 -- target/dif.sh@43 -- # local sub 00:20:29.612 04:23:42 -- target/dif.sh@45 -- # for sub in "$@" 00:20:29.612 04:23:42 -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:29.612 04:23:42 -- target/dif.sh@36 -- # local sub_id=0 00:20:29.612 04:23:42 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:29.612 04:23:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.612 04:23:42 -- common/autotest_common.sh@10 -- # set +x 00:20:29.612 04:23:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.612 04:23:42 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:29.612 04:23:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.612 04:23:42 -- common/autotest_common.sh@10 -- # set +x 00:20:29.612 04:23:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.612 04:23:42 -- target/dif.sh@109 -- # NULL_DIF=2 00:20:29.612 04:23:42 -- target/dif.sh@109 -- # bs=4k 00:20:29.612 04:23:42 -- target/dif.sh@109 -- # numjobs=8 00:20:29.612 04:23:42 -- target/dif.sh@109 -- # iodepth=16 00:20:29.612 04:23:42 -- target/dif.sh@109 -- # runtime= 00:20:29.612 04:23:42 -- target/dif.sh@109 -- # files=2 00:20:29.612 04:23:42 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:20:29.612 04:23:42 -- target/dif.sh@28 -- # local sub 00:20:29.612 04:23:42 -- target/dif.sh@30 -- # for sub in "$@" 00:20:29.612 04:23:42 -- target/dif.sh@31 -- # create_subsystem 0 00:20:29.612 04:23:42 -- target/dif.sh@18 -- # local sub_id=0 00:20:29.612 04:23:42 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:20:29.612 04:23:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.612 04:23:42 -- common/autotest_common.sh@10 -- # set +x 00:20:29.612 bdev_null0 00:20:29.612 04:23:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.612 04:23:42 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:29.612 04:23:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.612 04:23:42 -- common/autotest_common.sh@10 -- # set +x 00:20:29.612 04:23:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.612 04:23:42 -- target/dif.sh@23 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:29.612 04:23:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.612 04:23:42 -- common/autotest_common.sh@10 -- # set +x 00:20:29.612 04:23:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.612 04:23:42 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:29.612 04:23:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.612 04:23:42 -- common/autotest_common.sh@10 -- # set +x 00:20:29.612 [2024-12-06 04:23:42.122020] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:29.612 04:23:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.612 04:23:42 -- target/dif.sh@30 -- # for sub in "$@" 00:20:29.612 04:23:42 -- target/dif.sh@31 -- # create_subsystem 1 00:20:29.612 04:23:42 -- target/dif.sh@18 -- # local sub_id=1 00:20:29.612 04:23:42 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:20:29.612 04:23:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.612 04:23:42 -- common/autotest_common.sh@10 -- # set +x 00:20:29.612 bdev_null1 00:20:29.612 04:23:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.612 04:23:42 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:20:29.612 04:23:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.612 04:23:42 -- common/autotest_common.sh@10 -- # set +x 00:20:29.612 04:23:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.612 04:23:42 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:20:29.612 04:23:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.612 04:23:42 -- common/autotest_common.sh@10 -- # set +x 00:20:29.612 04:23:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.612 04:23:42 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:29.612 04:23:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.612 04:23:42 -- common/autotest_common.sh@10 -- # set +x 00:20:29.612 04:23:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.612 04:23:42 -- target/dif.sh@30 -- # for sub in "$@" 00:20:29.612 04:23:42 -- target/dif.sh@31 -- # create_subsystem 2 00:20:29.612 04:23:42 -- target/dif.sh@18 -- # local sub_id=2 00:20:29.612 04:23:42 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:20:29.612 04:23:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.612 04:23:42 -- common/autotest_common.sh@10 -- # set +x 00:20:29.872 bdev_null2 00:20:29.873 04:23:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.873 04:23:42 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:20:29.873 04:23:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.873 04:23:42 -- common/autotest_common.sh@10 -- # set +x 00:20:29.873 04:23:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.873 04:23:42 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:20:29.873 04:23:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.873 04:23:42 -- common/autotest_common.sh@10 -- # set +x 00:20:29.873 04:23:42 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.873 04:23:42 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:29.873 04:23:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.873 04:23:42 -- common/autotest_common.sh@10 -- # set +x 00:20:29.873 04:23:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.873 04:23:42 -- target/dif.sh@112 -- # fio /dev/fd/62 00:20:29.873 04:23:42 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:20:29.873 04:23:42 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:20:29.873 04:23:42 -- nvmf/common.sh@520 -- # config=() 00:20:29.873 04:23:42 -- nvmf/common.sh@520 -- # local subsystem config 00:20:29.873 04:23:42 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:20:29.873 04:23:42 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:20:29.873 { 00:20:29.873 "params": { 00:20:29.873 "name": "Nvme$subsystem", 00:20:29.873 "trtype": "$TEST_TRANSPORT", 00:20:29.873 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:29.873 "adrfam": "ipv4", 00:20:29.873 "trsvcid": "$NVMF_PORT", 00:20:29.873 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:29.873 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:29.873 "hdgst": ${hdgst:-false}, 00:20:29.873 "ddgst": ${ddgst:-false} 00:20:29.873 }, 00:20:29.873 "method": "bdev_nvme_attach_controller" 00:20:29.873 } 00:20:29.873 EOF 00:20:29.873 )") 00:20:29.873 04:23:42 -- target/dif.sh@82 -- # gen_fio_conf 00:20:29.873 04:23:42 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:29.873 04:23:42 -- target/dif.sh@54 -- # local file 00:20:29.873 04:23:42 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:29.873 04:23:42 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:20:29.873 04:23:42 -- target/dif.sh@56 -- # cat 00:20:29.873 04:23:42 -- nvmf/common.sh@542 -- # cat 00:20:29.873 04:23:42 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:29.873 04:23:42 -- common/autotest_common.sh@1328 -- # local sanitizers 00:20:29.873 04:23:42 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:29.873 04:23:42 -- common/autotest_common.sh@1330 -- # shift 00:20:29.873 04:23:42 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:20:29.873 04:23:42 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:20:29.873 04:23:42 -- target/dif.sh@72 -- # (( file = 1 )) 00:20:29.873 04:23:42 -- target/dif.sh@72 -- # (( file <= files )) 00:20:29.873 04:23:42 -- target/dif.sh@73 -- # cat 00:20:29.873 04:23:42 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:20:29.873 04:23:42 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:20:29.873 { 00:20:29.873 "params": { 00:20:29.873 "name": "Nvme$subsystem", 00:20:29.873 "trtype": "$TEST_TRANSPORT", 00:20:29.873 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:29.873 "adrfam": "ipv4", 00:20:29.873 "trsvcid": "$NVMF_PORT", 00:20:29.873 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:29.873 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:29.873 "hdgst": ${hdgst:-false}, 00:20:29.873 "ddgst": ${ddgst:-false} 00:20:29.873 }, 00:20:29.873 "method": "bdev_nvme_attach_controller" 00:20:29.873 } 00:20:29.873 EOF 00:20:29.873 )") 00:20:29.873 04:23:42 -- common/autotest_common.sh@1334 -- # ldd 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:29.873 04:23:42 -- nvmf/common.sh@542 -- # cat 00:20:29.873 04:23:42 -- common/autotest_common.sh@1334 -- # grep libasan 00:20:29.873 04:23:42 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:20:29.873 04:23:42 -- target/dif.sh@72 -- # (( file++ )) 00:20:29.873 04:23:42 -- target/dif.sh@72 -- # (( file <= files )) 00:20:29.873 04:23:42 -- target/dif.sh@73 -- # cat 00:20:29.873 04:23:42 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:20:29.873 04:23:42 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:20:29.873 { 00:20:29.873 "params": { 00:20:29.873 "name": "Nvme$subsystem", 00:20:29.873 "trtype": "$TEST_TRANSPORT", 00:20:29.873 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:29.873 "adrfam": "ipv4", 00:20:29.873 "trsvcid": "$NVMF_PORT", 00:20:29.873 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:29.873 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:29.873 "hdgst": ${hdgst:-false}, 00:20:29.873 "ddgst": ${ddgst:-false} 00:20:29.873 }, 00:20:29.873 "method": "bdev_nvme_attach_controller" 00:20:29.873 } 00:20:29.873 EOF 00:20:29.873 )") 00:20:29.873 04:23:42 -- target/dif.sh@72 -- # (( file++ )) 00:20:29.873 04:23:42 -- target/dif.sh@72 -- # (( file <= files )) 00:20:29.873 04:23:42 -- nvmf/common.sh@542 -- # cat 00:20:29.873 04:23:42 -- nvmf/common.sh@544 -- # jq . 00:20:29.873 04:23:42 -- nvmf/common.sh@545 -- # IFS=, 00:20:29.873 04:23:42 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:20:29.873 "params": { 00:20:29.873 "name": "Nvme0", 00:20:29.873 "trtype": "tcp", 00:20:29.873 "traddr": "10.0.0.2", 00:20:29.873 "adrfam": "ipv4", 00:20:29.873 "trsvcid": "4420", 00:20:29.873 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:29.873 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:29.873 "hdgst": false, 00:20:29.873 "ddgst": false 00:20:29.873 }, 00:20:29.873 "method": "bdev_nvme_attach_controller" 00:20:29.873 },{ 00:20:29.873 "params": { 00:20:29.873 "name": "Nvme1", 00:20:29.873 "trtype": "tcp", 00:20:29.873 "traddr": "10.0.0.2", 00:20:29.873 "adrfam": "ipv4", 00:20:29.873 "trsvcid": "4420", 00:20:29.873 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:29.873 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:29.873 "hdgst": false, 00:20:29.873 "ddgst": false 00:20:29.873 }, 00:20:29.873 "method": "bdev_nvme_attach_controller" 00:20:29.873 },{ 00:20:29.873 "params": { 00:20:29.873 "name": "Nvme2", 00:20:29.873 "trtype": "tcp", 00:20:29.873 "traddr": "10.0.0.2", 00:20:29.873 "adrfam": "ipv4", 00:20:29.873 "trsvcid": "4420", 00:20:29.873 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:29.873 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:29.873 "hdgst": false, 00:20:29.873 "ddgst": false 00:20:29.873 }, 00:20:29.873 "method": "bdev_nvme_attach_controller" 00:20:29.873 }' 00:20:29.873 04:23:42 -- common/autotest_common.sh@1334 -- # asan_lib= 00:20:29.873 04:23:42 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:20:29.873 04:23:42 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:20:29.873 04:23:42 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:29.873 04:23:42 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:20:29.873 04:23:42 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:20:29.873 04:23:42 -- common/autotest_common.sh@1334 -- # asan_lib= 00:20:29.873 04:23:42 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:20:29.873 04:23:42 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:29.873 04:23:42 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:29.873 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:20:29.873 ... 00:20:29.873 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:20:29.873 ... 00:20:29.873 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:20:29.873 ... 00:20:29.873 fio-3.35 00:20:29.873 Starting 24 threads 00:20:30.442 [2024-12-06 04:23:42.958762] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:20:30.442 [2024-12-06 04:23:42.958828] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:20:42.648 00:20:42.648 filename0: (groupid=0, jobs=1): err= 0: pid=87403: Fri Dec 6 04:23:54 2024 00:20:42.648 read: IOPS=259, BW=1039KiB/s (1064kB/s)(10.2MiB/10009msec) 00:20:42.648 slat (usec): min=3, max=8105, avg=40.94, stdev=397.26 00:20:42.648 clat (msec): min=14, max=131, avg=61.37, stdev=19.33 00:20:42.648 lat (msec): min=14, max=131, avg=61.41, stdev=19.33 00:20:42.648 clat percentiles (msec): 00:20:42.648 | 1.00th=[ 24], 5.00th=[ 33], 10.00th=[ 37], 20.00th=[ 45], 00:20:42.648 | 30.00th=[ 48], 40.00th=[ 56], 50.00th=[ 61], 60.00th=[ 67], 00:20:42.648 | 70.00th=[ 72], 80.00th=[ 78], 90.00th=[ 88], 95.00th=[ 95], 00:20:42.648 | 99.00th=[ 108], 99.50th=[ 121], 99.90th=[ 121], 99.95th=[ 132], 00:20:42.648 | 99.99th=[ 132] 00:20:42.648 bw ( KiB/s): min= 720, max= 1432, per=3.85%, avg=1009.89, stdev=187.17, samples=19 00:20:42.648 iops : min= 180, max= 358, avg=252.42, stdev=46.74, samples=19 00:20:42.648 lat (msec) : 20=0.38%, 50=33.03%, 100=64.55%, 250=2.04% 00:20:42.648 cpu : usr=34.40%, sys=1.29%, ctx=1013, majf=0, minf=9 00:20:42.648 IO depths : 1=0.1%, 2=1.0%, 4=3.8%, 8=79.6%, 16=15.5%, 32=0.0%, >=64=0.0% 00:20:42.648 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.648 complete : 0=0.0%, 4=88.1%, 8=11.1%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.648 issued rwts: total=2601,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:42.648 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:42.648 filename0: (groupid=0, jobs=1): err= 0: pid=87404: Fri Dec 6 04:23:54 2024 00:20:42.648 read: IOPS=241, BW=965KiB/s (988kB/s)(9672KiB/10025msec) 00:20:42.648 slat (usec): min=4, max=3767, avg=20.02, stdev=104.91 00:20:42.648 clat (msec): min=26, max=132, avg=66.19, stdev=18.19 00:20:42.648 lat (msec): min=26, max=132, avg=66.21, stdev=18.19 00:20:42.648 clat percentiles (msec): 00:20:42.648 | 1.00th=[ 35], 5.00th=[ 39], 10.00th=[ 45], 20.00th=[ 48], 00:20:42.648 | 30.00th=[ 56], 40.00th=[ 61], 50.00th=[ 67], 60.00th=[ 70], 00:20:42.648 | 70.00th=[ 74], 80.00th=[ 84], 90.00th=[ 91], 95.00th=[ 97], 00:20:42.648 | 99.00th=[ 110], 99.50th=[ 110], 99.90th=[ 129], 99.95th=[ 132], 00:20:42.648 | 99.99th=[ 132] 00:20:42.648 bw ( KiB/s): min= 704, max= 1180, per=3.60%, avg=943.84, stdev=127.59, samples=19 00:20:42.648 iops : min= 176, max= 295, avg=235.95, stdev=31.92, samples=19 00:20:42.648 lat (msec) : 50=24.11%, 100=72.46%, 250=3.43% 00:20:42.648 cpu : usr=35.16%, sys=1.42%, ctx=1033, majf=0, minf=9 00:20:42.648 IO depths : 1=0.1%, 2=0.9%, 4=3.4%, 8=78.8%, 16=16.8%, 32=0.0%, >=64=0.0% 00:20:42.648 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.649 complete : 0=0.0%, 4=89.0%, 8=10.3%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.649 issued rwts: total=2418,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:42.649 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:42.649 filename0: (groupid=0, jobs=1): err= 0: pid=87405: Fri Dec 6 04:23:54 2024 00:20:42.649 read: IOPS=245, BW=982KiB/s (1006kB/s)(9844KiB/10022msec) 00:20:42.649 slat (usec): min=3, max=7021, avg=20.85, stdev=154.14 00:20:42.649 clat (msec): min=27, max=132, avg=65.05, stdev=19.29 00:20:42.649 lat (msec): min=27, max=132, avg=65.07, stdev=19.29 00:20:42.649 clat percentiles (msec): 00:20:42.649 | 1.00th=[ 31], 5.00th=[ 35], 10.00th=[ 41], 20.00th=[ 47], 00:20:42.649 | 30.00th=[ 54], 40.00th=[ 59], 50.00th=[ 64], 60.00th=[ 70], 00:20:42.649 | 70.00th=[ 74], 80.00th=[ 84], 90.00th=[ 93], 95.00th=[ 96], 00:20:42.649 | 99.00th=[ 107], 99.50th=[ 113], 99.90th=[ 131], 99.95th=[ 132], 00:20:42.649 | 99.99th=[ 132] 00:20:42.649 bw ( KiB/s): min= 734, max= 1236, per=3.64%, avg=954.53, stdev=160.60, samples=19 00:20:42.649 iops : min= 183, max= 309, avg=238.58, stdev=40.18, samples=19 00:20:42.649 lat (msec) : 50=26.37%, 100=69.89%, 250=3.74% 00:20:42.649 cpu : usr=39.57%, sys=1.36%, ctx=1192, majf=0, minf=9 00:20:42.649 IO depths : 1=0.1%, 2=0.8%, 4=3.5%, 8=79.3%, 16=16.4%, 32=0.0%, >=64=0.0% 00:20:42.649 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.649 complete : 0=0.0%, 4=88.6%, 8=10.6%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.649 issued rwts: total=2461,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:42.649 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:42.649 filename0: (groupid=0, jobs=1): err= 0: pid=87406: Fri Dec 6 04:23:54 2024 00:20:42.649 read: IOPS=255, BW=1022KiB/s (1047kB/s)(9.99MiB/10007msec) 00:20:42.649 slat (usec): min=6, max=11082, avg=33.16, stdev=314.67 00:20:42.649 clat (msec): min=8, max=159, avg=62.47, stdev=21.70 00:20:42.649 lat (msec): min=8, max=159, avg=62.51, stdev=21.70 00:20:42.649 clat percentiles (msec): 00:20:42.649 | 1.00th=[ 22], 5.00th=[ 32], 10.00th=[ 36], 20.00th=[ 45], 00:20:42.649 | 30.00th=[ 48], 40.00th=[ 56], 50.00th=[ 61], 60.00th=[ 68], 00:20:42.649 | 70.00th=[ 72], 80.00th=[ 84], 90.00th=[ 93], 95.00th=[ 96], 00:20:42.649 | 99.00th=[ 120], 99.50th=[ 121], 99.90th=[ 138], 99.95th=[ 161], 00:20:42.649 | 99.99th=[ 161] 00:20:42.649 bw ( KiB/s): min= 640, max= 1488, per=3.77%, avg=987.11, stdev=229.73, samples=19 00:20:42.649 iops : min= 160, max= 372, avg=246.74, stdev=57.42, samples=19 00:20:42.649 lat (msec) : 10=0.27%, 20=0.59%, 50=33.63%, 100=60.70%, 250=4.81% 00:20:42.649 cpu : usr=34.71%, sys=1.20%, ctx=1013, majf=0, minf=9 00:20:42.649 IO depths : 1=0.1%, 2=1.5%, 4=6.0%, 8=77.2%, 16=15.2%, 32=0.0%, >=64=0.0% 00:20:42.649 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.649 complete : 0=0.0%, 4=88.7%, 8=10.0%, 16=1.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.649 issued rwts: total=2557,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:42.649 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:42.649 filename0: (groupid=0, jobs=1): err= 0: pid=87407: Fri Dec 6 04:23:54 2024 00:20:42.649 read: IOPS=238, BW=954KiB/s (977kB/s)(9560KiB/10021msec) 00:20:42.649 slat (usec): min=4, max=8025, avg=34.34, stdev=325.47 00:20:42.649 clat (msec): min=14, max=135, avg=66.89, stdev=21.92 00:20:42.649 lat (msec): min=15, max=135, avg=66.93, stdev=21.94 00:20:42.649 clat 
percentiles (msec): 00:20:42.649 | 1.00th=[ 28], 5.00th=[ 35], 10.00th=[ 40], 20.00th=[ 48], 00:20:42.649 | 30.00th=[ 55], 40.00th=[ 61], 50.00th=[ 66], 60.00th=[ 71], 00:20:42.649 | 70.00th=[ 77], 80.00th=[ 84], 90.00th=[ 95], 95.00th=[ 110], 00:20:42.649 | 99.00th=[ 123], 99.50th=[ 127], 99.90th=[ 136], 99.95th=[ 136], 00:20:42.649 | 99.99th=[ 136] 00:20:42.649 bw ( KiB/s): min= 544, max= 1568, per=3.63%, avg=951.15, stdev=248.13, samples=20 00:20:42.649 iops : min= 136, max= 392, avg=237.75, stdev=61.97, samples=20 00:20:42.649 lat (msec) : 20=0.13%, 50=26.32%, 100=65.27%, 250=8.28% 00:20:42.649 cpu : usr=35.29%, sys=1.37%, ctx=1037, majf=0, minf=9 00:20:42.649 IO depths : 1=0.1%, 2=1.8%, 4=7.4%, 8=75.0%, 16=15.7%, 32=0.0%, >=64=0.0% 00:20:42.649 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.649 complete : 0=0.0%, 4=89.7%, 8=8.6%, 16=1.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.649 issued rwts: total=2390,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:42.649 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:42.649 filename0: (groupid=0, jobs=1): err= 0: pid=87408: Fri Dec 6 04:23:54 2024 00:20:42.649 read: IOPS=249, BW=999KiB/s (1023kB/s)(9.77MiB/10019msec) 00:20:42.649 slat (usec): min=6, max=8038, avg=30.62, stdev=238.28 00:20:42.649 clat (msec): min=15, max=134, avg=63.93, stdev=19.87 00:20:42.649 lat (msec): min=18, max=134, avg=63.96, stdev=19.87 00:20:42.649 clat percentiles (msec): 00:20:42.649 | 1.00th=[ 27], 5.00th=[ 33], 10.00th=[ 39], 20.00th=[ 46], 00:20:42.649 | 30.00th=[ 51], 40.00th=[ 58], 50.00th=[ 63], 60.00th=[ 71], 00:20:42.649 | 70.00th=[ 75], 80.00th=[ 83], 90.00th=[ 91], 95.00th=[ 94], 00:20:42.649 | 99.00th=[ 117], 99.50th=[ 120], 99.90th=[ 132], 99.95th=[ 134], 00:20:42.649 | 99.99th=[ 134] 00:20:42.649 bw ( KiB/s): min= 640, max= 1512, per=3.79%, avg=993.55, stdev=227.63, samples=20 00:20:42.649 iops : min= 160, max= 378, avg=248.35, stdev=56.85, samples=20 00:20:42.649 lat (msec) : 20=0.28%, 50=29.74%, 100=66.35%, 250=3.64% 00:20:42.649 cpu : usr=38.02%, sys=1.60%, ctx=1055, majf=0, minf=9 00:20:42.649 IO depths : 1=0.1%, 2=1.7%, 4=6.8%, 8=76.0%, 16=15.4%, 32=0.0%, >=64=0.0% 00:20:42.649 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.649 complete : 0=0.0%, 4=89.2%, 8=9.3%, 16=1.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.649 issued rwts: total=2502,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:42.649 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:42.649 filename0: (groupid=0, jobs=1): err= 0: pid=87409: Fri Dec 6 04:23:54 2024 00:20:42.649 read: IOPS=237, BW=950KiB/s (973kB/s)(9512KiB/10015msec) 00:20:42.649 slat (usec): min=4, max=8062, avg=29.38, stdev=255.85 00:20:42.649 clat (msec): min=17, max=135, avg=67.20, stdev=22.26 00:20:42.649 lat (msec): min=17, max=135, avg=67.23, stdev=22.25 00:20:42.649 clat percentiles (msec): 00:20:42.649 | 1.00th=[ 26], 5.00th=[ 33], 10.00th=[ 39], 20.00th=[ 47], 00:20:42.649 | 30.00th=[ 54], 40.00th=[ 61], 50.00th=[ 68], 60.00th=[ 71], 00:20:42.649 | 70.00th=[ 80], 80.00th=[ 86], 90.00th=[ 97], 95.00th=[ 109], 00:20:42.649 | 99.00th=[ 121], 99.50th=[ 125], 99.90th=[ 136], 99.95th=[ 136], 00:20:42.649 | 99.99th=[ 136] 00:20:42.649 bw ( KiB/s): min= 624, max= 1584, per=3.61%, avg=946.80, stdev=252.37, samples=20 00:20:42.649 iops : min= 156, max= 396, avg=236.70, stdev=63.09, samples=20 00:20:42.649 lat (msec) : 20=0.50%, 50=25.82%, 100=64.38%, 250=9.29% 00:20:42.649 cpu : usr=39.18%, sys=1.48%, ctx=1463, majf=0, minf=9 00:20:42.649 IO 
depths : 1=0.1%, 2=2.5%, 4=10.2%, 8=72.3%, 16=14.9%, 32=0.0%, >=64=0.0% 00:20:42.649 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.649 complete : 0=0.0%, 4=90.2%, 8=7.6%, 16=2.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.649 issued rwts: total=2378,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:42.649 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:42.649 filename0: (groupid=0, jobs=1): err= 0: pid=87410: Fri Dec 6 04:23:54 2024 00:20:42.649 read: IOPS=256, BW=1025KiB/s (1049kB/s)(10.0MiB/10001msec) 00:20:42.649 slat (usec): min=3, max=8105, avg=40.58, stdev=419.01 00:20:42.649 clat (usec): min=1334, max=151969, avg=62253.06, stdev=24123.41 00:20:42.649 lat (usec): min=1342, max=151979, avg=62293.64, stdev=24122.54 00:20:42.649 clat percentiles (usec): 00:20:42.649 | 1.00th=[ 1713], 5.00th=[ 27657], 10.00th=[ 35914], 20.00th=[ 42730], 00:20:42.649 | 30.00th=[ 47973], 40.00th=[ 55313], 50.00th=[ 60031], 60.00th=[ 67634], 00:20:42.649 | 70.00th=[ 71828], 80.00th=[ 82314], 90.00th=[ 93848], 95.00th=[106431], 00:20:42.649 | 99.00th=[120062], 99.50th=[121111], 99.90th=[137364], 99.95th=[152044], 00:20:42.649 | 99.99th=[152044] 00:20:42.649 bw ( KiB/s): min= 640, max= 1376, per=3.69%, avg=967.05, stdev=226.08, samples=19 00:20:42.649 iops : min= 160, max= 344, avg=241.74, stdev=56.48, samples=19 00:20:42.649 lat (msec) : 2=1.48%, 4=0.86%, 10=0.23%, 20=0.62%, 50=31.85% 00:20:42.649 lat (msec) : 100=56.44%, 250=8.51% 00:20:42.649 cpu : usr=34.32%, sys=1.32%, ctx=1006, majf=0, minf=9 00:20:42.649 IO depths : 1=0.1%, 2=1.6%, 4=6.4%, 8=76.7%, 16=15.3%, 32=0.0%, >=64=0.0% 00:20:42.649 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.649 complete : 0=0.0%, 4=88.9%, 8=9.7%, 16=1.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.649 issued rwts: total=2562,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:42.649 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:42.649 filename1: (groupid=0, jobs=1): err= 0: pid=87411: Fri Dec 6 04:23:54 2024 00:20:42.649 read: IOPS=255, BW=1024KiB/s (1048kB/s)(10.0MiB/10049msec) 00:20:42.649 slat (usec): min=4, max=8033, avg=35.82, stdev=332.79 00:20:42.649 clat (msec): min=8, max=117, avg=62.29, stdev=18.72 00:20:42.649 lat (msec): min=8, max=121, avg=62.33, stdev=18.74 00:20:42.649 clat percentiles (msec): 00:20:42.649 | 1.00th=[ 17], 5.00th=[ 33], 10.00th=[ 40], 20.00th=[ 47], 00:20:42.649 | 30.00th=[ 51], 40.00th=[ 58], 50.00th=[ 62], 60.00th=[ 69], 00:20:42.649 | 70.00th=[ 72], 80.00th=[ 79], 90.00th=[ 87], 95.00th=[ 95], 00:20:42.649 | 99.00th=[ 104], 99.50th=[ 107], 99.90th=[ 115], 99.95th=[ 115], 00:20:42.649 | 99.99th=[ 117] 00:20:42.649 bw ( KiB/s): min= 816, max= 1216, per=3.82%, avg=1001.79, stdev=111.99, samples=19 00:20:42.649 iops : min= 204, max= 304, avg=250.42, stdev=27.96, samples=19 00:20:42.649 lat (msec) : 10=0.08%, 20=1.48%, 50=27.60%, 100=68.62%, 250=2.22% 00:20:42.649 cpu : usr=39.98%, sys=1.53%, ctx=1142, majf=0, minf=9 00:20:42.649 IO depths : 1=0.1%, 2=0.3%, 4=1.2%, 8=81.6%, 16=16.8%, 32=0.0%, >=64=0.0% 00:20:42.649 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.649 complete : 0=0.0%, 4=88.0%, 8=11.7%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.649 issued rwts: total=2572,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:42.649 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:42.649 filename1: (groupid=0, jobs=1): err= 0: pid=87412: Fri Dec 6 04:23:54 2024 00:20:42.649 read: IOPS=250, BW=1002KiB/s (1026kB/s)(9.80MiB/10019msec) 
00:20:42.649 slat (usec): min=4, max=4025, avg=21.53, stdev=113.63 00:20:42.649 clat (msec): min=24, max=129, avg=63.74, stdev=19.22 00:20:42.649 lat (msec): min=24, max=129, avg=63.76, stdev=19.21 00:20:42.649 clat percentiles (msec): 00:20:42.649 | 1.00th=[ 31], 5.00th=[ 35], 10.00th=[ 40], 20.00th=[ 46], 00:20:42.649 | 30.00th=[ 52], 40.00th=[ 58], 50.00th=[ 64], 60.00th=[ 70], 00:20:42.649 | 70.00th=[ 72], 80.00th=[ 84], 90.00th=[ 88], 95.00th=[ 96], 00:20:42.650 | 99.00th=[ 116], 99.50th=[ 120], 99.90th=[ 128], 99.95th=[ 128], 00:20:42.650 | 99.99th=[ 130] 00:20:42.650 bw ( KiB/s): min= 624, max= 1340, per=3.70%, avg=971.58, stdev=169.02, samples=19 00:20:42.650 iops : min= 156, max= 335, avg=242.89, stdev=42.25, samples=19 00:20:42.650 lat (msec) : 50=28.57%, 100=68.69%, 250=2.75% 00:20:42.650 cpu : usr=40.09%, sys=1.82%, ctx=1319, majf=0, minf=9 00:20:42.650 IO depths : 1=0.1%, 2=0.4%, 4=1.8%, 8=81.1%, 16=16.6%, 32=0.0%, >=64=0.0% 00:20:42.650 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.650 complete : 0=0.0%, 4=88.1%, 8=11.5%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.650 issued rwts: total=2510,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:42.650 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:42.650 filename1: (groupid=0, jobs=1): err= 0: pid=87413: Fri Dec 6 04:23:54 2024 00:20:42.650 read: IOPS=357, BW=1432KiB/s (1466kB/s)(14.0MiB/10048msec) 00:20:42.650 slat (usec): min=6, max=5020, avg=21.38, stdev=165.99 00:20:42.650 clat (usec): min=844, max=126936, avg=44517.47, stdev=30310.67 00:20:42.650 lat (usec): min=852, max=126951, avg=44538.85, stdev=30310.20 00:20:42.650 clat percentiles (usec): 00:20:42.650 | 1.00th=[ 1598], 5.00th=[ 5342], 10.00th=[ 9634], 20.00th=[ 12256], 00:20:42.650 | 30.00th=[ 16319], 40.00th=[ 22938], 50.00th=[ 46400], 60.00th=[ 58459], 00:20:42.650 | 70.00th=[ 66323], 80.00th=[ 72877], 90.00th=[ 85459], 95.00th=[ 91751], 00:20:42.650 | 99.00th=[105382], 99.50th=[113771], 99.90th=[127402], 99.95th=[127402], 00:20:42.650 | 99.99th=[127402] 00:20:42.650 bw ( KiB/s): min= 784, max= 5698, per=5.46%, avg=1432.10, stdev=1307.33, samples=20 00:20:42.650 iops : min= 196, max= 1424, avg=358.00, stdev=326.75, samples=20 00:20:42.650 lat (usec) : 1000=0.06% 00:20:42.650 lat (msec) : 2=0.95%, 4=2.78%, 10=7.87%, 20=24.61%, 50=16.32% 00:20:42.650 lat (msec) : 100=45.75%, 250=1.67% 00:20:42.650 cpu : usr=41.48%, sys=1.98%, ctx=1658, majf=0, minf=0 00:20:42.650 IO depths : 1=1.0%, 2=3.9%, 4=11.7%, 8=69.4%, 16=14.0%, 32=0.0%, >=64=0.0% 00:20:42.650 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.650 complete : 0=0.0%, 4=90.8%, 8=6.6%, 16=2.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.650 issued rwts: total=3596,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:42.650 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:42.650 filename1: (groupid=0, jobs=1): err= 0: pid=87414: Fri Dec 6 04:23:54 2024 00:20:42.650 read: IOPS=259, BW=1037KiB/s (1062kB/s)(10.1MiB/10007msec) 00:20:42.650 slat (usec): min=3, max=8063, avg=32.35, stdev=286.33 00:20:42.650 clat (msec): min=8, max=159, avg=61.56, stdev=19.44 00:20:42.650 lat (msec): min=8, max=159, avg=61.59, stdev=19.44 00:20:42.650 clat percentiles (msec): 00:20:42.650 | 1.00th=[ 24], 5.00th=[ 33], 10.00th=[ 36], 20.00th=[ 45], 00:20:42.650 | 30.00th=[ 48], 40.00th=[ 57], 50.00th=[ 61], 60.00th=[ 67], 00:20:42.650 | 70.00th=[ 72], 80.00th=[ 81], 90.00th=[ 87], 95.00th=[ 94], 00:20:42.650 | 99.00th=[ 105], 99.50th=[ 120], 99.90th=[ 120], 
99.95th=[ 159], 00:20:42.650 | 99.99th=[ 161] 00:20:42.650 bw ( KiB/s): min= 720, max= 1504, per=3.84%, avg=1007.00, stdev=192.93, samples=19 00:20:42.650 iops : min= 180, max= 376, avg=251.68, stdev=48.16, samples=19 00:20:42.650 lat (msec) : 10=0.23%, 20=0.50%, 50=32.45%, 100=64.62%, 250=2.20% 00:20:42.650 cpu : usr=41.46%, sys=1.46%, ctx=1125, majf=0, minf=9 00:20:42.650 IO depths : 1=0.1%, 2=1.0%, 4=3.8%, 8=79.5%, 16=15.6%, 32=0.0%, >=64=0.0% 00:20:42.650 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.650 complete : 0=0.0%, 4=88.1%, 8=11.1%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.650 issued rwts: total=2595,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:42.650 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:42.650 filename1: (groupid=0, jobs=1): err= 0: pid=87415: Fri Dec 6 04:23:54 2024 00:20:42.650 read: IOPS=256, BW=1026KiB/s (1051kB/s)(10.0MiB/10009msec) 00:20:42.650 slat (usec): min=4, max=8060, avg=26.61, stdev=223.20 00:20:42.650 clat (msec): min=9, max=163, avg=62.24, stdev=19.68 00:20:42.650 lat (msec): min=9, max=163, avg=62.26, stdev=19.68 00:20:42.650 clat percentiles (msec): 00:20:42.650 | 1.00th=[ 24], 5.00th=[ 33], 10.00th=[ 38], 20.00th=[ 46], 00:20:42.650 | 30.00th=[ 48], 40.00th=[ 57], 50.00th=[ 61], 60.00th=[ 68], 00:20:42.650 | 70.00th=[ 72], 80.00th=[ 81], 90.00th=[ 87], 95.00th=[ 94], 00:20:42.650 | 99.00th=[ 118], 99.50th=[ 123], 99.90th=[ 123], 99.95th=[ 163], 00:20:42.650 | 99.99th=[ 163] 00:20:42.650 bw ( KiB/s): min= 752, max= 1424, per=3.79%, avg=994.74, stdev=184.96, samples=19 00:20:42.650 iops : min= 188, max= 356, avg=248.63, stdev=46.17, samples=19 00:20:42.650 lat (msec) : 10=0.12%, 20=0.35%, 50=32.01%, 100=65.19%, 250=2.34% 00:20:42.650 cpu : usr=35.22%, sys=1.40%, ctx=1024, majf=0, minf=9 00:20:42.650 IO depths : 1=0.1%, 2=0.9%, 4=3.5%, 8=79.6%, 16=15.9%, 32=0.0%, >=64=0.0% 00:20:42.650 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.650 complete : 0=0.0%, 4=88.2%, 8=11.0%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.650 issued rwts: total=2568,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:42.650 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:42.650 filename1: (groupid=0, jobs=1): err= 0: pid=87416: Fri Dec 6 04:23:54 2024 00:20:42.650 read: IOPS=253, BW=1015KiB/s (1040kB/s)(9.93MiB/10012msec) 00:20:42.650 slat (usec): min=3, max=8038, avg=31.41, stdev=297.55 00:20:42.650 clat (msec): min=16, max=168, avg=62.89, stdev=19.18 00:20:42.650 lat (msec): min=16, max=168, avg=62.92, stdev=19.19 00:20:42.650 clat percentiles (msec): 00:20:42.650 | 1.00th=[ 28], 5.00th=[ 35], 10.00th=[ 41], 20.00th=[ 46], 00:20:42.650 | 30.00th=[ 51], 40.00th=[ 56], 50.00th=[ 62], 60.00th=[ 67], 00:20:42.650 | 70.00th=[ 72], 80.00th=[ 81], 90.00th=[ 88], 95.00th=[ 96], 00:20:42.650 | 99.00th=[ 122], 99.50th=[ 123], 99.90th=[ 132], 99.95th=[ 169], 00:20:42.650 | 99.99th=[ 169] 00:20:42.650 bw ( KiB/s): min= 736, max= 1440, per=3.85%, avg=1010.10, stdev=186.75, samples=20 00:20:42.650 iops : min= 184, max= 360, avg=252.50, stdev=46.67, samples=20 00:20:42.650 lat (msec) : 20=0.12%, 50=29.87%, 100=67.61%, 250=2.40% 00:20:42.650 cpu : usr=43.50%, sys=1.82%, ctx=1302, majf=0, minf=9 00:20:42.650 IO depths : 1=0.1%, 2=0.9%, 4=3.6%, 8=79.5%, 16=16.0%, 32=0.0%, >=64=0.0% 00:20:42.650 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.650 complete : 0=0.0%, 4=88.3%, 8=10.9%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.650 issued rwts: total=2541,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:20:42.650 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:42.650 filename1: (groupid=0, jobs=1): err= 0: pid=87417: Fri Dec 6 04:23:54 2024 00:20:42.650 read: IOPS=348, BW=1393KiB/s (1426kB/s)(13.7MiB/10053msec) 00:20:42.650 slat (usec): min=4, max=8025, avg=22.81, stdev=196.68 00:20:42.650 clat (usec): min=777, max=117814, avg=45805.51, stdev=29035.13 00:20:42.650 lat (usec): min=785, max=117825, avg=45828.32, stdev=29045.55 00:20:42.650 clat percentiles (msec): 00:20:42.650 | 1.00th=[ 3], 5.00th=[ 11], 10.00th=[ 12], 20.00th=[ 14], 00:20:42.650 | 30.00th=[ 18], 40.00th=[ 25], 50.00th=[ 51], 60.00th=[ 61], 00:20:42.650 | 70.00th=[ 66], 80.00th=[ 73], 90.00th=[ 84], 95.00th=[ 92], 00:20:42.650 | 99.00th=[ 104], 99.50th=[ 106], 99.90th=[ 118], 99.95th=[ 118], 00:20:42.650 | 99.99th=[ 118] 00:20:42.650 bw ( KiB/s): min= 760, max= 4918, per=5.32%, avg=1394.30, stdev=1158.73, samples=20 00:20:42.650 iops : min= 190, max= 1229, avg=348.55, stdev=289.60, samples=20 00:20:42.650 lat (usec) : 1000=0.06% 00:20:42.650 lat (msec) : 2=0.14%, 4=1.86%, 10=1.83%, 20=26.69%, 50=19.63% 00:20:42.650 lat (msec) : 100=48.29%, 250=1.51% 00:20:42.650 cpu : usr=42.66%, sys=2.00%, ctx=1239, majf=0, minf=9 00:20:42.650 IO depths : 1=1.0%, 2=3.7%, 4=11.5%, 8=69.9%, 16=13.9%, 32=0.0%, >=64=0.0% 00:20:42.650 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.650 complete : 0=0.0%, 4=90.6%, 8=6.8%, 16=2.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.650 issued rwts: total=3500,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:42.650 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:42.650 filename1: (groupid=0, jobs=1): err= 0: pid=87418: Fri Dec 6 04:23:54 2024 00:20:42.650 read: IOPS=254, BW=1017KiB/s (1042kB/s)(9.95MiB/10017msec) 00:20:42.650 slat (usec): min=6, max=8044, avg=29.74, stdev=254.31 00:20:42.650 clat (msec): min=17, max=122, avg=62.75, stdev=19.11 00:20:42.650 lat (msec): min=17, max=122, avg=62.78, stdev=19.11 00:20:42.650 clat percentiles (msec): 00:20:42.650 | 1.00th=[ 26], 5.00th=[ 35], 10.00th=[ 39], 20.00th=[ 47], 00:20:42.650 | 30.00th=[ 50], 40.00th=[ 57], 50.00th=[ 62], 60.00th=[ 69], 00:20:42.650 | 70.00th=[ 72], 80.00th=[ 81], 90.00th=[ 87], 95.00th=[ 95], 00:20:42.650 | 99.00th=[ 111], 99.50th=[ 123], 99.90th=[ 123], 99.95th=[ 123], 00:20:42.650 | 99.99th=[ 123] 00:20:42.650 bw ( KiB/s): min= 640, max= 1624, per=3.87%, avg=1014.35, stdev=215.67, samples=20 00:20:42.650 iops : min= 160, max= 406, avg=253.55, stdev=53.86, samples=20 00:20:42.650 lat (msec) : 20=0.39%, 50=31.24%, 100=65.93%, 250=2.43% 00:20:42.650 cpu : usr=35.60%, sys=1.40%, ctx=1102, majf=0, minf=9 00:20:42.650 IO depths : 1=0.1%, 2=0.9%, 4=3.6%, 8=79.5%, 16=16.0%, 32=0.0%, >=64=0.0% 00:20:42.650 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.650 complete : 0=0.0%, 4=88.3%, 8=10.9%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.650 issued rwts: total=2548,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:42.650 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:42.650 filename2: (groupid=0, jobs=1): err= 0: pid=87419: Fri Dec 6 04:23:54 2024 00:20:42.650 read: IOPS=341, BW=1365KiB/s (1398kB/s)(13.4MiB/10043msec) 00:20:42.650 slat (usec): min=3, max=8034, avg=25.39, stdev=210.27 00:20:42.650 clat (usec): min=956, max=128307, avg=46709.52, stdev=29476.44 00:20:42.650 lat (usec): min=964, max=128328, avg=46734.91, stdev=29490.58 00:20:42.650 clat percentiles (msec): 00:20:42.650 | 1.00th=[ 9], 5.00th=[ 11], 
10.00th=[ 12], 20.00th=[ 13], 00:20:42.650 | 30.00th=[ 22], 40.00th=[ 32], 50.00th=[ 49], 60.00th=[ 61], 00:20:42.650 | 70.00th=[ 67], 80.00th=[ 73], 90.00th=[ 87], 95.00th=[ 94], 00:20:42.650 | 99.00th=[ 108], 99.50th=[ 116], 99.90th=[ 121], 99.95th=[ 121], 00:20:42.650 | 99.99th=[ 129] 00:20:42.650 bw ( KiB/s): min= 656, max= 4472, per=5.20%, avg=1364.80, stdev=1097.22, samples=20 00:20:42.650 iops : min= 164, max= 1118, avg=341.20, stdev=274.30, samples=20 00:20:42.650 lat (usec) : 1000=0.09% 00:20:42.650 lat (msec) : 2=0.03%, 4=0.06%, 10=2.16%, 20=26.15%, 50=22.26% 00:20:42.650 lat (msec) : 100=47.24%, 250=2.01% 00:20:42.651 cpu : usr=39.15%, sys=1.50%, ctx=1238, majf=0, minf=9 00:20:42.651 IO depths : 1=1.0%, 2=3.8%, 4=12.0%, 8=69.6%, 16=13.7%, 32=0.0%, >=64=0.0% 00:20:42.651 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.651 complete : 0=0.0%, 4=90.7%, 8=6.6%, 16=2.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.651 issued rwts: total=3427,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:42.651 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:42.651 filename2: (groupid=0, jobs=1): err= 0: pid=87420: Fri Dec 6 04:23:54 2024 00:20:42.651 read: IOPS=328, BW=1312KiB/s (1344kB/s)(12.9MiB/10045msec) 00:20:42.651 slat (usec): min=3, max=8047, avg=21.42, stdev=241.92 00:20:42.651 clat (usec): min=1184, max=132838, avg=48565.74, stdev=34291.86 00:20:42.651 lat (usec): min=1193, max=132865, avg=48587.16, stdev=34289.89 00:20:42.651 clat percentiles (msec): 00:20:42.651 | 1.00th=[ 3], 5.00th=[ 9], 10.00th=[ 11], 20.00th=[ 13], 00:20:42.651 | 30.00th=[ 15], 40.00th=[ 24], 50.00th=[ 49], 60.00th=[ 64], 00:20:42.651 | 70.00th=[ 72], 80.00th=[ 83], 90.00th=[ 94], 95.00th=[ 107], 00:20:42.651 | 99.00th=[ 129], 99.50th=[ 131], 99.90th=[ 133], 99.95th=[ 133], 00:20:42.651 | 99.99th=[ 133] 00:20:42.651 bw ( KiB/s): min= 640, max= 4694, per=5.01%, avg=1313.50, stdev=1209.20, samples=20 00:20:42.651 iops : min= 160, max= 1173, avg=328.35, stdev=302.23, samples=20 00:20:42.651 lat (msec) : 2=0.06%, 4=1.58%, 10=4.49%, 20=28.38%, 50=16.24% 00:20:42.651 lat (msec) : 100=42.85%, 250=6.40% 00:20:42.651 cpu : usr=37.80%, sys=1.51%, ctx=1057, majf=0, minf=9 00:20:42.651 IO depths : 1=1.1%, 2=5.7%, 4=19.2%, 8=61.6%, 16=12.5%, 32=0.0%, >=64=0.0% 00:20:42.651 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.651 complete : 0=0.0%, 4=92.7%, 8=2.9%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.651 issued rwts: total=3295,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:42.651 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:42.651 filename2: (groupid=0, jobs=1): err= 0: pid=87421: Fri Dec 6 04:23:54 2024 00:20:42.651 read: IOPS=248, BW=993KiB/s (1017kB/s)(9960KiB/10026msec) 00:20:42.651 slat (usec): min=3, max=6471, avg=24.50, stdev=172.31 00:20:42.651 clat (msec): min=24, max=127, avg=64.23, stdev=19.81 00:20:42.651 lat (msec): min=24, max=127, avg=64.26, stdev=19.80 00:20:42.651 clat percentiles (msec): 00:20:42.651 | 1.00th=[ 28], 5.00th=[ 33], 10.00th=[ 40], 20.00th=[ 46], 00:20:42.651 | 30.00th=[ 52], 40.00th=[ 58], 50.00th=[ 63], 60.00th=[ 70], 00:20:42.651 | 70.00th=[ 75], 80.00th=[ 83], 90.00th=[ 89], 95.00th=[ 97], 00:20:42.651 | 99.00th=[ 115], 99.50th=[ 116], 99.90th=[ 126], 99.95th=[ 128], 00:20:42.651 | 99.99th=[ 128] 00:20:42.651 bw ( KiB/s): min= 640, max= 1325, per=3.69%, avg=967.00, stdev=184.66, samples=19 00:20:42.651 iops : min= 160, max= 331, avg=241.68, stdev=46.18, samples=19 00:20:42.651 lat (msec) : 50=28.84%, 
100=66.95%, 250=4.22% 00:20:42.651 cpu : usr=43.86%, sys=1.70%, ctx=1284, majf=0, minf=9 00:20:42.651 IO depths : 1=0.1%, 2=1.5%, 4=6.1%, 8=76.8%, 16=15.5%, 32=0.0%, >=64=0.0% 00:20:42.651 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.651 complete : 0=0.0%, 4=89.0%, 8=9.7%, 16=1.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.651 issued rwts: total=2490,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:42.651 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:42.651 filename2: (groupid=0, jobs=1): err= 0: pid=87422: Fri Dec 6 04:23:54 2024 00:20:42.651 read: IOPS=257, BW=1031KiB/s (1055kB/s)(10.1MiB/10040msec) 00:20:42.651 slat (usec): min=4, max=8034, avg=28.32, stdev=229.90 00:20:42.651 clat (msec): min=11, max=133, avg=61.89, stdev=19.11 00:20:42.651 lat (msec): min=11, max=133, avg=61.92, stdev=19.10 00:20:42.651 clat percentiles (msec): 00:20:42.651 | 1.00th=[ 24], 5.00th=[ 33], 10.00th=[ 39], 20.00th=[ 45], 00:20:42.651 | 30.00th=[ 50], 40.00th=[ 57], 50.00th=[ 62], 60.00th=[ 68], 00:20:42.651 | 70.00th=[ 72], 80.00th=[ 79], 90.00th=[ 88], 95.00th=[ 94], 00:20:42.651 | 99.00th=[ 111], 99.50th=[ 114], 99.90th=[ 129], 99.95th=[ 129], 00:20:42.651 | 99.99th=[ 134] 00:20:42.651 bw ( KiB/s): min= 672, max= 1206, per=3.85%, avg=1008.16, stdev=131.15, samples=19 00:20:42.651 iops : min= 168, max= 301, avg=252.00, stdev=32.75, samples=19 00:20:42.651 lat (msec) : 20=0.54%, 50=30.11%, 100=67.26%, 250=2.09% 00:20:42.651 cpu : usr=43.19%, sys=1.69%, ctx=1297, majf=0, minf=9 00:20:42.651 IO depths : 1=0.1%, 2=0.8%, 4=3.0%, 8=80.1%, 16=16.0%, 32=0.0%, >=64=0.0% 00:20:42.651 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.651 complete : 0=0.0%, 4=88.1%, 8=11.2%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.651 issued rwts: total=2587,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:42.651 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:42.651 filename2: (groupid=0, jobs=1): err= 0: pid=87423: Fri Dec 6 04:23:54 2024 00:20:42.651 read: IOPS=343, BW=1374KiB/s (1407kB/s)(13.5MiB/10044msec) 00:20:42.651 slat (usec): min=6, max=7029, avg=24.15, stdev=204.06 00:20:42.651 clat (usec): min=1605, max=128190, avg=46393.66, stdev=28922.08 00:20:42.651 lat (usec): min=1614, max=128210, avg=46417.81, stdev=28927.81 00:20:42.651 clat percentiles (msec): 00:20:42.651 | 1.00th=[ 7], 5.00th=[ 11], 10.00th=[ 12], 20.00th=[ 15], 00:20:42.651 | 30.00th=[ 20], 40.00th=[ 35], 50.00th=[ 49], 60.00th=[ 59], 00:20:42.651 | 70.00th=[ 66], 80.00th=[ 73], 90.00th=[ 86], 95.00th=[ 93], 00:20:42.651 | 99.00th=[ 105], 99.50th=[ 121], 99.90th=[ 129], 99.95th=[ 129], 00:20:42.651 | 99.99th=[ 129] 00:20:42.651 bw ( KiB/s): min= 768, max= 4576, per=5.25%, avg=1375.70, stdev=1067.01, samples=20 00:20:42.651 iops : min= 192, max= 1144, avg=343.90, stdev=266.77, samples=20 00:20:42.651 lat (msec) : 2=0.41%, 4=0.17%, 10=3.86%, 20=26.52%, 50=20.35% 00:20:42.651 lat (msec) : 100=47.13%, 250=1.57% 00:20:42.651 cpu : usr=41.42%, sys=1.61%, ctx=1713, majf=0, minf=10 00:20:42.651 IO depths : 1=0.9%, 2=3.4%, 4=10.6%, 8=71.2%, 16=13.9%, 32=0.0%, >=64=0.0% 00:20:42.651 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.651 complete : 0=0.0%, 4=90.3%, 8=7.3%, 16=2.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.651 issued rwts: total=3450,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:42.651 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:42.651 filename2: (groupid=0, jobs=1): err= 0: pid=87424: Fri Dec 6 04:23:54 2024 00:20:42.651 
read: IOPS=246, BW=986KiB/s (1010kB/s)(9896KiB/10032msec) 00:20:42.651 slat (usec): min=5, max=4028, avg=20.91, stdev=81.24 00:20:42.651 clat (msec): min=22, max=136, avg=64.72, stdev=21.16 00:20:42.651 lat (msec): min=22, max=136, avg=64.74, stdev=21.16 00:20:42.651 clat percentiles (msec): 00:20:42.651 | 1.00th=[ 29], 5.00th=[ 35], 10.00th=[ 39], 20.00th=[ 46], 00:20:42.651 | 30.00th=[ 50], 40.00th=[ 58], 50.00th=[ 64], 60.00th=[ 70], 00:20:42.651 | 70.00th=[ 74], 80.00th=[ 84], 90.00th=[ 95], 95.00th=[ 103], 00:20:42.651 | 99.00th=[ 118], 99.50th=[ 122], 99.90th=[ 131], 99.95th=[ 136], 00:20:42.651 | 99.99th=[ 136] 00:20:42.651 bw ( KiB/s): min= 528, max= 1342, per=3.65%, avg=956.95, stdev=202.86, samples=19 00:20:42.651 iops : min= 132, max= 335, avg=239.21, stdev=50.66, samples=19 00:20:42.651 lat (msec) : 50=30.76%, 100=64.07%, 250=5.17% 00:20:42.651 cpu : usr=38.87%, sys=1.53%, ctx=1088, majf=0, minf=9 00:20:42.651 IO depths : 1=0.1%, 2=1.7%, 4=7.0%, 8=75.8%, 16=15.4%, 32=0.0%, >=64=0.0% 00:20:42.651 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.651 complete : 0=0.0%, 4=89.2%, 8=9.2%, 16=1.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.651 issued rwts: total=2474,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:42.651 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:42.651 filename2: (groupid=0, jobs=1): err= 0: pid=87425: Fri Dec 6 04:23:54 2024 00:20:42.651 read: IOPS=250, BW=1002KiB/s (1026kB/s)(9.81MiB/10023msec) 00:20:42.651 slat (usec): min=4, max=8094, avg=32.87, stdev=330.32 00:20:42.651 clat (msec): min=22, max=131, avg=63.71, stdev=19.12 00:20:42.651 lat (msec): min=22, max=131, avg=63.74, stdev=19.13 00:20:42.651 clat percentiles (msec): 00:20:42.651 | 1.00th=[ 31], 5.00th=[ 35], 10.00th=[ 40], 20.00th=[ 47], 00:20:42.651 | 30.00th=[ 51], 40.00th=[ 59], 50.00th=[ 63], 60.00th=[ 69], 00:20:42.651 | 70.00th=[ 72], 80.00th=[ 83], 90.00th=[ 89], 95.00th=[ 95], 00:20:42.651 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 125], 99.95th=[ 132], 00:20:42.651 | 99.99th=[ 132] 00:20:42.651 bw ( KiB/s): min= 752, max= 1330, per=3.71%, avg=973.58, stdev=164.19, samples=19 00:20:42.651 iops : min= 188, max= 332, avg=243.37, stdev=40.99, samples=19 00:20:42.651 lat (msec) : 50=29.11%, 100=68.06%, 250=2.83% 00:20:42.651 cpu : usr=36.99%, sys=1.45%, ctx=1092, majf=0, minf=9 00:20:42.651 IO depths : 1=0.1%, 2=0.8%, 4=3.1%, 8=79.8%, 16=16.2%, 32=0.0%, >=64=0.0% 00:20:42.651 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.651 complete : 0=0.0%, 4=88.3%, 8=11.0%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.651 issued rwts: total=2511,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:42.651 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:42.651 filename2: (groupid=0, jobs=1): err= 0: pid=87426: Fri Dec 6 04:23:54 2024 00:20:42.651 read: IOPS=344, BW=1379KiB/s (1412kB/s)(13.6MiB/10073msec) 00:20:42.651 slat (usec): min=3, max=8039, avg=23.68, stdev=256.26 00:20:42.651 clat (msec): min=2, max=125, avg=46.24, stdev=30.35 00:20:42.651 lat (msec): min=2, max=125, avg=46.26, stdev=30.36 00:20:42.651 clat percentiles (msec): 00:20:42.651 | 1.00th=[ 6], 5.00th=[ 9], 10.00th=[ 12], 20.00th=[ 13], 00:20:42.651 | 30.00th=[ 15], 40.00th=[ 24], 50.00th=[ 50], 60.00th=[ 61], 00:20:42.651 | 70.00th=[ 69], 80.00th=[ 75], 90.00th=[ 85], 95.00th=[ 94], 00:20:42.651 | 99.00th=[ 111], 99.50th=[ 120], 99.90th=[ 125], 99.95th=[ 127], 00:20:42.651 | 99.99th=[ 127] 00:20:42.651 bw ( KiB/s): min= 768, max= 5024, per=5.27%, avg=1382.40, 
stdev=1216.16, samples=20 00:20:42.651 iops : min= 192, max= 1256, avg=345.60, stdev=304.04, samples=20 00:20:42.651 lat (msec) : 4=0.46%, 10=7.69%, 20=25.60%, 50=16.71%, 100=47.32% 00:20:42.651 lat (msec) : 250=2.22% 00:20:42.651 cpu : usr=33.74%, sys=1.29%, ctx=959, majf=0, minf=9 00:20:42.651 IO depths : 1=1.0%, 2=4.0%, 4=12.5%, 8=68.8%, 16=13.8%, 32=0.0%, >=64=0.0% 00:20:42.651 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.651 complete : 0=0.0%, 4=91.0%, 8=6.1%, 16=2.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.651 issued rwts: total=3472,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:42.651 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:42.651 00:20:42.651 Run status group 0 (all jobs): 00:20:42.651 READ: bw=25.6MiB/s (26.8MB/s), 950KiB/s-1432KiB/s (973kB/s-1466kB/s), io=258MiB (270MB), run=10001-10073msec 00:20:42.651 04:23:54 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:20:42.651 04:23:54 -- target/dif.sh@43 -- # local sub 00:20:42.652 04:23:54 -- target/dif.sh@45 -- # for sub in "$@" 00:20:42.652 04:23:54 -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:42.652 04:23:54 -- target/dif.sh@36 -- # local sub_id=0 00:20:42.652 04:23:54 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:42.652 04:23:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.652 04:23:54 -- common/autotest_common.sh@10 -- # set +x 00:20:42.652 04:23:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.652 04:23:54 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:42.652 04:23:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.652 04:23:54 -- common/autotest_common.sh@10 -- # set +x 00:20:42.652 04:23:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.652 04:23:54 -- target/dif.sh@45 -- # for sub in "$@" 00:20:42.652 04:23:54 -- target/dif.sh@46 -- # destroy_subsystem 1 00:20:42.652 04:23:54 -- target/dif.sh@36 -- # local sub_id=1 00:20:42.652 04:23:54 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:42.652 04:23:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.652 04:23:54 -- common/autotest_common.sh@10 -- # set +x 00:20:42.652 04:23:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.652 04:23:54 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:20:42.652 04:23:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.652 04:23:54 -- common/autotest_common.sh@10 -- # set +x 00:20:42.652 04:23:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.652 04:23:54 -- target/dif.sh@45 -- # for sub in "$@" 00:20:42.652 04:23:54 -- target/dif.sh@46 -- # destroy_subsystem 2 00:20:42.652 04:23:54 -- target/dif.sh@36 -- # local sub_id=2 00:20:42.652 04:23:54 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:20:42.652 04:23:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.652 04:23:54 -- common/autotest_common.sh@10 -- # set +x 00:20:42.652 04:23:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.652 04:23:54 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:20:42.652 04:23:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.652 04:23:54 -- common/autotest_common.sh@10 -- # set +x 00:20:42.652 04:23:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.652 04:23:54 -- target/dif.sh@115 -- # NULL_DIF=1 00:20:42.652 04:23:54 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:20:42.652 
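After the 24-thread pass (25.6 MiB/s aggregate), destroy_subsystems tears down cnode0-2 and their backing null bdevs, and the script switches to NULL_DIF=1 with 8k/16k/128k blocks, 2 jobs per file and iodepth 8 for the next run. rpc_cmd is a thin wrapper around scripts/rpc.py against the running target; issued by hand, one build/teardown cycle for subsystem 0 would look roughly like this (arguments copied from the trace, the direct rpc.py form is an assumption):

# create a 64 MB null bdev with 512-byte blocks, 16-byte metadata and the requested DIF type,
# then expose it over NVMe/TCP on 10.0.0.2:4420 (assumed rpc.py equivalent of the rpc_cmd calls)
scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# teardown, as destroy_subsystems does above: drop the subsystem first, then its backing bdev
scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
scripts/rpc.py bdev_null_delete bdev_null0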
04:23:54 -- target/dif.sh@115 -- # numjobs=2 00:20:42.652 04:23:54 -- target/dif.sh@115 -- # iodepth=8 00:20:42.652 04:23:54 -- target/dif.sh@115 -- # runtime=5 00:20:42.652 04:23:54 -- target/dif.sh@115 -- # files=1 00:20:42.652 04:23:54 -- target/dif.sh@117 -- # create_subsystems 0 1 00:20:42.652 04:23:54 -- target/dif.sh@28 -- # local sub 00:20:42.652 04:23:54 -- target/dif.sh@30 -- # for sub in "$@" 00:20:42.652 04:23:54 -- target/dif.sh@31 -- # create_subsystem 0 00:20:42.652 04:23:54 -- target/dif.sh@18 -- # local sub_id=0 00:20:42.652 04:23:54 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:20:42.652 04:23:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.652 04:23:54 -- common/autotest_common.sh@10 -- # set +x 00:20:42.652 bdev_null0 00:20:42.652 04:23:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.652 04:23:54 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:42.652 04:23:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.652 04:23:54 -- common/autotest_common.sh@10 -- # set +x 00:20:42.652 04:23:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.652 04:23:54 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:42.652 04:23:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.652 04:23:54 -- common/autotest_common.sh@10 -- # set +x 00:20:42.652 04:23:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.652 04:23:54 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:42.652 04:23:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.652 04:23:54 -- common/autotest_common.sh@10 -- # set +x 00:20:42.652 [2024-12-06 04:23:54.802960] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:42.652 04:23:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.652 04:23:54 -- target/dif.sh@30 -- # for sub in "$@" 00:20:42.652 04:23:54 -- target/dif.sh@31 -- # create_subsystem 1 00:20:42.652 04:23:54 -- target/dif.sh@18 -- # local sub_id=1 00:20:42.652 04:23:54 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:20:42.652 04:23:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.652 04:23:54 -- common/autotest_common.sh@10 -- # set +x 00:20:42.652 bdev_null1 00:20:42.652 04:23:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.652 04:23:54 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:20:42.652 04:23:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.652 04:23:54 -- common/autotest_common.sh@10 -- # set +x 00:20:42.652 04:23:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.652 04:23:54 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:20:42.652 04:23:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.652 04:23:54 -- common/autotest_common.sh@10 -- # set +x 00:20:42.652 04:23:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.652 04:23:54 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:42.652 04:23:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.652 04:23:54 -- 
common/autotest_common.sh@10 -- # set +x 00:20:42.652 04:23:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.652 04:23:54 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:20:42.652 04:23:54 -- target/dif.sh@118 -- # fio /dev/fd/62 00:20:42.652 04:23:54 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:20:42.652 04:23:54 -- nvmf/common.sh@520 -- # config=() 00:20:42.652 04:23:54 -- nvmf/common.sh@520 -- # local subsystem config 00:20:42.652 04:23:54 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:20:42.652 04:23:54 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:20:42.652 { 00:20:42.652 "params": { 00:20:42.652 "name": "Nvme$subsystem", 00:20:42.652 "trtype": "$TEST_TRANSPORT", 00:20:42.652 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:42.652 "adrfam": "ipv4", 00:20:42.652 "trsvcid": "$NVMF_PORT", 00:20:42.652 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:42.652 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:42.652 "hdgst": ${hdgst:-false}, 00:20:42.652 "ddgst": ${ddgst:-false} 00:20:42.652 }, 00:20:42.652 "method": "bdev_nvme_attach_controller" 00:20:42.652 } 00:20:42.652 EOF 00:20:42.652 )") 00:20:42.652 04:23:54 -- target/dif.sh@82 -- # gen_fio_conf 00:20:42.652 04:23:54 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:42.652 04:23:54 -- target/dif.sh@54 -- # local file 00:20:42.652 04:23:54 -- target/dif.sh@56 -- # cat 00:20:42.652 04:23:54 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:42.652 04:23:54 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:20:42.652 04:23:54 -- nvmf/common.sh@542 -- # cat 00:20:42.652 04:23:54 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:42.652 04:23:54 -- common/autotest_common.sh@1328 -- # local sanitizers 00:20:42.652 04:23:54 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:42.652 04:23:54 -- common/autotest_common.sh@1330 -- # shift 00:20:42.652 04:23:54 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:20:42.652 04:23:54 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:20:42.652 04:23:54 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:20:42.652 04:23:54 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:20:42.652 { 00:20:42.652 "params": { 00:20:42.652 "name": "Nvme$subsystem", 00:20:42.652 "trtype": "$TEST_TRANSPORT", 00:20:42.652 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:42.652 "adrfam": "ipv4", 00:20:42.652 "trsvcid": "$NVMF_PORT", 00:20:42.652 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:42.652 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:42.652 "hdgst": ${hdgst:-false}, 00:20:42.652 "ddgst": ${ddgst:-false} 00:20:42.652 }, 00:20:42.652 "method": "bdev_nvme_attach_controller" 00:20:42.652 } 00:20:42.652 EOF 00:20:42.652 )") 00:20:42.652 04:23:54 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:42.652 04:23:54 -- common/autotest_common.sh@1334 -- # grep libasan 00:20:42.652 04:23:54 -- target/dif.sh@72 -- # (( file = 1 )) 00:20:42.652 04:23:54 -- target/dif.sh@72 -- # (( file <= files )) 00:20:42.652 04:23:54 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:20:42.652 04:23:54 -- nvmf/common.sh@542 -- # cat 00:20:42.652 04:23:54 -- target/dif.sh@73 -- # cat 00:20:42.652 04:23:54 -- 
target/dif.sh@72 -- # (( file++ )) 00:20:42.652 04:23:54 -- target/dif.sh@72 -- # (( file <= files )) 00:20:42.652 04:23:54 -- nvmf/common.sh@544 -- # jq . 00:20:42.652 04:23:54 -- nvmf/common.sh@545 -- # IFS=, 00:20:42.652 04:23:54 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:20:42.652 "params": { 00:20:42.652 "name": "Nvme0", 00:20:42.652 "trtype": "tcp", 00:20:42.652 "traddr": "10.0.0.2", 00:20:42.652 "adrfam": "ipv4", 00:20:42.652 "trsvcid": "4420", 00:20:42.652 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:42.652 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:42.652 "hdgst": false, 00:20:42.652 "ddgst": false 00:20:42.652 }, 00:20:42.652 "method": "bdev_nvme_attach_controller" 00:20:42.652 },{ 00:20:42.652 "params": { 00:20:42.652 "name": "Nvme1", 00:20:42.652 "trtype": "tcp", 00:20:42.652 "traddr": "10.0.0.2", 00:20:42.652 "adrfam": "ipv4", 00:20:42.652 "trsvcid": "4420", 00:20:42.652 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:42.652 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:42.652 "hdgst": false, 00:20:42.652 "ddgst": false 00:20:42.652 }, 00:20:42.652 "method": "bdev_nvme_attach_controller" 00:20:42.652 }' 00:20:42.652 04:23:54 -- common/autotest_common.sh@1334 -- # asan_lib= 00:20:42.652 04:23:54 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:20:42.652 04:23:54 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:20:42.653 04:23:54 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:20:42.653 04:23:54 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:42.653 04:23:54 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:20:42.653 04:23:54 -- common/autotest_common.sh@1334 -- # asan_lib= 00:20:42.653 04:23:54 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:20:42.653 04:23:54 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:42.653 04:23:54 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:42.653 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:20:42.653 ... 00:20:42.653 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:20:42.653 ... 00:20:42.653 fio-3.35 00:20:42.653 Starting 4 threads 00:20:42.911 [2024-12-06 04:23:55.452313] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
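For reference, the create_subsystems calls traced above reduce to roughly the following standalone rpc.py sequence (a sketch only: rpc_cmd in the trace wraps scripts/rpc.py against the default /var/tmp/spdk.sock, and the TCP transport is assumed to have been created earlier in the run). The JSON printed just before fio starts is what gen_nvmf_target_json hands to the spdk_bdev engine over /dev/fd/62.
    scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # repeated with bdev_null1 / cnode1 for the second subsystem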
00:20:42.911 [2024-12-06 04:23:55.452408] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:20:48.187 00:20:48.187 filename0: (groupid=0, jobs=1): err= 0: pid=87574: Fri Dec 6 04:24:00 2024 00:20:48.187 read: IOPS=2194, BW=17.1MiB/s (18.0MB/s)(85.8MiB/5003msec) 00:20:48.187 slat (nsec): min=6157, max=86339, avg=16823.81, stdev=10253.44 00:20:48.187 clat (usec): min=378, max=12921, avg=3589.42, stdev=937.74 00:20:48.187 lat (usec): min=390, max=12940, avg=3606.24, stdev=938.96 00:20:48.187 clat percentiles (usec): 00:20:48.187 | 1.00th=[ 1090], 5.00th=[ 1827], 10.00th=[ 2114], 20.00th=[ 3032], 00:20:48.187 | 30.00th=[ 3490], 40.00th=[ 3654], 50.00th=[ 3752], 60.00th=[ 3884], 00:20:48.187 | 70.00th=[ 4015], 80.00th=[ 4146], 90.00th=[ 4424], 95.00th=[ 4752], 00:20:48.187 | 99.00th=[ 6128], 99.50th=[ 6456], 99.90th=[ 7635], 99.95th=[ 8160], 00:20:48.187 | 99.99th=[11994] 00:20:48.187 bw ( KiB/s): min=16256, max=18928, per=25.57%, avg=17696.00, stdev=1024.28, samples=9 00:20:48.187 iops : min= 2032, max= 2366, avg=2212.00, stdev=128.04, samples=9 00:20:48.187 lat (usec) : 500=0.05%, 750=0.07%, 1000=0.26% 00:20:48.187 lat (msec) : 2=7.21%, 4=60.07%, 10=32.29%, 20=0.05% 00:20:48.187 cpu : usr=93.46%, sys=5.74%, ctx=9, majf=0, minf=0 00:20:48.187 IO depths : 1=1.6%, 2=13.1%, 4=56.9%, 8=28.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:48.187 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:48.187 complete : 0=0.0%, 4=94.8%, 8=5.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:48.187 issued rwts: total=10977,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:48.187 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:48.187 filename0: (groupid=0, jobs=1): err= 0: pid=87575: Fri Dec 6 04:24:00 2024 00:20:48.187 read: IOPS=2078, BW=16.2MiB/s (17.0MB/s)(81.2MiB/5001msec) 00:20:48.187 slat (usec): min=4, max=104, avg=19.49, stdev=10.34 00:20:48.187 clat (usec): min=346, max=11806, avg=3776.89, stdev=831.08 00:20:48.187 lat (usec): min=370, max=11830, avg=3796.38, stdev=831.89 00:20:48.187 clat percentiles (usec): 00:20:48.187 | 1.00th=[ 1467], 5.00th=[ 2008], 10.00th=[ 2540], 20.00th=[ 3425], 00:20:48.187 | 30.00th=[ 3621], 40.00th=[ 3720], 50.00th=[ 3851], 60.00th=[ 3982], 00:20:48.187 | 70.00th=[ 4113], 80.00th=[ 4293], 90.00th=[ 4555], 95.00th=[ 4948], 00:20:48.187 | 99.00th=[ 6063], 99.50th=[ 6456], 99.90th=[ 7570], 99.95th=[ 7963], 00:20:48.187 | 99.99th=[10028] 00:20:48.187 bw ( KiB/s): min=15616, max=19824, per=24.32%, avg=16832.00, stdev=1402.90, samples=9 00:20:48.187 iops : min= 1952, max= 2478, avg=2104.00, stdev=175.36, samples=9 00:20:48.187 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.03% 00:20:48.187 lat (msec) : 2=4.85%, 4=56.02%, 10=39.07%, 20=0.02% 00:20:48.187 cpu : usr=94.20%, sys=4.92%, ctx=12, majf=0, minf=0 00:20:48.187 IO depths : 1=2.8%, 2=16.4%, 4=55.1%, 8=25.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:48.187 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:48.187 complete : 0=0.0%, 4=93.4%, 8=6.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:48.187 issued rwts: total=10395,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:48.187 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:48.187 filename1: (groupid=0, jobs=1): err= 0: pid=87576: Fri Dec 6 04:24:00 2024 00:20:48.187 read: IOPS=2290, BW=17.9MiB/s (18.8MB/s)(89.5MiB/5001msec) 00:20:48.187 slat (nsec): min=4687, max=89922, avg=16272.87, stdev=9571.78 00:20:48.187 clat (usec): min=600, max=13083, avg=3436.24, stdev=1193.01 00:20:48.187 
lat (usec): min=614, max=13090, avg=3452.52, stdev=1195.39 00:20:48.187 clat percentiles (usec): 00:20:48.187 | 1.00th=[ 938], 5.00th=[ 1074], 10.00th=[ 1156], 20.00th=[ 2802], 00:20:48.187 | 30.00th=[ 3294], 40.00th=[ 3425], 50.00th=[ 3621], 60.00th=[ 3785], 00:20:48.187 | 70.00th=[ 4015], 80.00th=[ 4293], 90.00th=[ 4621], 95.00th=[ 5014], 00:20:48.187 | 99.00th=[ 6194], 99.50th=[ 6521], 99.90th=[ 7701], 99.95th=[ 8094], 00:20:48.187 | 99.99th=[11863] 00:20:48.187 bw ( KiB/s): min=15040, max=23648, per=26.98%, avg=18677.33, stdev=3067.44, samples=9 00:20:48.187 iops : min= 1880, max= 2956, avg=2334.67, stdev=383.43, samples=9 00:20:48.187 lat (usec) : 750=0.04%, 1000=1.68% 00:20:48.187 lat (msec) : 2=14.10%, 4=53.31%, 10=30.82%, 20=0.04% 00:20:48.187 cpu : usr=93.08%, sys=5.74%, ctx=112, majf=0, minf=9 00:20:48.188 IO depths : 1=2.0%, 2=10.7%, 4=58.0%, 8=29.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:48.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:48.188 complete : 0=0.0%, 4=95.7%, 8=4.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:48.188 issued rwts: total=11455,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:48.188 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:48.188 filename1: (groupid=0, jobs=1): err= 0: pid=87577: Fri Dec 6 04:24:00 2024 00:20:48.188 read: IOPS=2090, BW=16.3MiB/s (17.1MB/s)(81.7MiB/5002msec) 00:20:48.188 slat (nsec): min=6331, max=93131, avg=19288.16, stdev=10697.61 00:20:48.188 clat (usec): min=813, max=11893, avg=3755.68, stdev=822.33 00:20:48.188 lat (usec): min=821, max=11900, avg=3774.97, stdev=823.64 00:20:48.188 clat percentiles (usec): 00:20:48.188 | 1.00th=[ 1713], 5.00th=[ 2089], 10.00th=[ 2442], 20.00th=[ 3392], 00:20:48.188 | 30.00th=[ 3621], 40.00th=[ 3720], 50.00th=[ 3851], 60.00th=[ 3982], 00:20:48.188 | 70.00th=[ 4080], 80.00th=[ 4228], 90.00th=[ 4555], 95.00th=[ 4883], 00:20:48.188 | 99.00th=[ 6063], 99.50th=[ 6456], 99.90th=[ 7570], 99.95th=[ 8094], 00:20:48.188 | 99.99th=[10028] 00:20:48.188 bw ( KiB/s): min=15808, max=18736, per=24.22%, avg=16766.22, stdev=890.18, samples=9 00:20:48.188 iops : min= 1976, max= 2342, avg=2095.78, stdev=111.27, samples=9 00:20:48.188 lat (usec) : 1000=0.03% 00:20:48.188 lat (msec) : 2=3.80%, 4=57.66%, 10=38.50%, 20=0.02% 00:20:48.188 cpu : usr=94.04%, sys=5.06%, ctx=67, majf=0, minf=0 00:20:48.188 IO depths : 1=2.8%, 2=15.5%, 4=55.6%, 8=26.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:48.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:48.188 complete : 0=0.0%, 4=93.8%, 8=6.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:48.188 issued rwts: total=10457,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:48.188 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:48.188 00:20:48.188 Run status group 0 (all jobs): 00:20:48.188 READ: bw=67.6MiB/s (70.9MB/s), 16.2MiB/s-17.9MiB/s (17.0MB/s-18.8MB/s), io=338MiB (355MB), run=5001-5003msec 00:20:48.447 04:24:00 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:20:48.447 04:24:00 -- target/dif.sh@43 -- # local sub 00:20:48.447 04:24:00 -- target/dif.sh@45 -- # for sub in "$@" 00:20:48.447 04:24:00 -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:48.447 04:24:00 -- target/dif.sh@36 -- # local sub_id=0 00:20:48.447 04:24:00 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:48.447 04:24:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.447 04:24:00 -- common/autotest_common.sh@10 -- # set +x 00:20:48.447 04:24:00 -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:20:48.447 04:24:00 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:48.447 04:24:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.447 04:24:00 -- common/autotest_common.sh@10 -- # set +x 00:20:48.447 04:24:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.447 04:24:00 -- target/dif.sh@45 -- # for sub in "$@" 00:20:48.447 04:24:00 -- target/dif.sh@46 -- # destroy_subsystem 1 00:20:48.447 04:24:00 -- target/dif.sh@36 -- # local sub_id=1 00:20:48.447 04:24:00 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:48.447 04:24:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.447 04:24:00 -- common/autotest_common.sh@10 -- # set +x 00:20:48.447 04:24:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.447 04:24:00 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:20:48.447 04:24:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.447 04:24:00 -- common/autotest_common.sh@10 -- # set +x 00:20:48.447 ************************************ 00:20:48.447 END TEST fio_dif_rand_params 00:20:48.447 ************************************ 00:20:48.447 04:24:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.447 00:20:48.447 real 0m24.704s 00:20:48.447 user 2m17.505s 00:20:48.447 sys 0m6.740s 00:20:48.447 04:24:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:48.447 04:24:00 -- common/autotest_common.sh@10 -- # set +x 00:20:48.447 04:24:00 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:20:48.447 04:24:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:20:48.447 04:24:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:48.447 04:24:00 -- common/autotest_common.sh@10 -- # set +x 00:20:48.447 ************************************ 00:20:48.447 START TEST fio_dif_digest 00:20:48.447 ************************************ 00:20:48.447 04:24:00 -- common/autotest_common.sh@1114 -- # fio_dif_digest 00:20:48.447 04:24:00 -- target/dif.sh@123 -- # local NULL_DIF 00:20:48.447 04:24:00 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:20:48.447 04:24:00 -- target/dif.sh@125 -- # local hdgst ddgst 00:20:48.447 04:24:00 -- target/dif.sh@127 -- # NULL_DIF=3 00:20:48.447 04:24:00 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:20:48.447 04:24:00 -- target/dif.sh@127 -- # numjobs=3 00:20:48.447 04:24:00 -- target/dif.sh@127 -- # iodepth=3 00:20:48.447 04:24:00 -- target/dif.sh@127 -- # runtime=10 00:20:48.447 04:24:00 -- target/dif.sh@128 -- # hdgst=true 00:20:48.447 04:24:00 -- target/dif.sh@128 -- # ddgst=true 00:20:48.447 04:24:00 -- target/dif.sh@130 -- # create_subsystems 0 00:20:48.447 04:24:00 -- target/dif.sh@28 -- # local sub 00:20:48.447 04:24:00 -- target/dif.sh@30 -- # for sub in "$@" 00:20:48.447 04:24:00 -- target/dif.sh@31 -- # create_subsystem 0 00:20:48.447 04:24:00 -- target/dif.sh@18 -- # local sub_id=0 00:20:48.447 04:24:00 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:20:48.447 04:24:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.447 04:24:00 -- common/autotest_common.sh@10 -- # set +x 00:20:48.447 bdev_null0 00:20:48.447 04:24:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.447 04:24:00 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:48.447 04:24:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.447 
04:24:00 -- common/autotest_common.sh@10 -- # set +x 00:20:48.447 04:24:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.447 04:24:00 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:48.447 04:24:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.447 04:24:00 -- common/autotest_common.sh@10 -- # set +x 00:20:48.447 04:24:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.447 04:24:00 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:48.447 04:24:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.447 04:24:00 -- common/autotest_common.sh@10 -- # set +x 00:20:48.447 [2024-12-06 04:24:00.916859] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:48.447 04:24:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.447 04:24:00 -- target/dif.sh@131 -- # fio /dev/fd/62 00:20:48.447 04:24:00 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:20:48.447 04:24:00 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:20:48.447 04:24:00 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:48.447 04:24:00 -- nvmf/common.sh@520 -- # config=() 00:20:48.447 04:24:00 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:48.447 04:24:00 -- target/dif.sh@82 -- # gen_fio_conf 00:20:48.447 04:24:00 -- nvmf/common.sh@520 -- # local subsystem config 00:20:48.447 04:24:00 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:20:48.447 04:24:00 -- target/dif.sh@54 -- # local file 00:20:48.447 04:24:00 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:48.447 04:24:00 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:20:48.447 04:24:00 -- target/dif.sh@56 -- # cat 00:20:48.447 04:24:00 -- common/autotest_common.sh@1328 -- # local sanitizers 00:20:48.447 04:24:00 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:20:48.447 { 00:20:48.447 "params": { 00:20:48.448 "name": "Nvme$subsystem", 00:20:48.448 "trtype": "$TEST_TRANSPORT", 00:20:48.448 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:48.448 "adrfam": "ipv4", 00:20:48.448 "trsvcid": "$NVMF_PORT", 00:20:48.448 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:48.448 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:48.448 "hdgst": ${hdgst:-false}, 00:20:48.448 "ddgst": ${ddgst:-false} 00:20:48.448 }, 00:20:48.448 "method": "bdev_nvme_attach_controller" 00:20:48.448 } 00:20:48.448 EOF 00:20:48.448 )") 00:20:48.448 04:24:00 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:48.448 04:24:00 -- common/autotest_common.sh@1330 -- # shift 00:20:48.448 04:24:00 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:20:48.448 04:24:00 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:20:48.448 04:24:00 -- nvmf/common.sh@542 -- # cat 00:20:48.448 04:24:00 -- target/dif.sh@72 -- # (( file = 1 )) 00:20:48.448 04:24:00 -- common/autotest_common.sh@1334 -- # grep libasan 00:20:48.448 04:24:00 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:20:48.448 04:24:00 -- target/dif.sh@72 -- # (( file <= files )) 00:20:48.448 04:24:00 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:48.448 04:24:00 -- 
nvmf/common.sh@544 -- # jq . 00:20:48.448 04:24:00 -- nvmf/common.sh@545 -- # IFS=, 00:20:48.448 04:24:00 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:20:48.448 "params": { 00:20:48.448 "name": "Nvme0", 00:20:48.448 "trtype": "tcp", 00:20:48.448 "traddr": "10.0.0.2", 00:20:48.448 "adrfam": "ipv4", 00:20:48.448 "trsvcid": "4420", 00:20:48.448 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:48.448 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:48.448 "hdgst": true, 00:20:48.448 "ddgst": true 00:20:48.448 }, 00:20:48.448 "method": "bdev_nvme_attach_controller" 00:20:48.448 }' 00:20:48.448 04:24:00 -- common/autotest_common.sh@1334 -- # asan_lib= 00:20:48.448 04:24:00 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:20:48.448 04:24:00 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:20:48.448 04:24:00 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:48.448 04:24:00 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:20:48.448 04:24:00 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:20:48.448 04:24:00 -- common/autotest_common.sh@1334 -- # asan_lib= 00:20:48.448 04:24:00 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:20:48.448 04:24:00 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:48.448 04:24:00 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:48.707 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:20:48.707 ... 00:20:48.707 fio-3.35 00:20:48.707 Starting 3 threads 00:20:49.274 [2024-12-06 04:24:01.566532] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
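Run outside the harness, the digest job that starts below amounts to roughly the invocation sketched here. The bdev name Nvme0n1 and the file-based JSON config are assumptions (the trace instead passes the generated config on /dev/fd/62 and the job file on /dev/fd/61); the bs/numjobs/iodepth/runtime values come from the dif.sh@127 defaults traced above, and the hdgst/ddgst settings live in the attach-controller JSON rather than in fio options.
    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /usr/src/fio/fio \
        --ioengine=spdk_bdev --spdk_json_conf=./nvme0.json --thread=1 \
        --name=filename0 --filename=Nvme0n1 \
        --rw=randread --bs=128k --numjobs=3 --iodepth=3 \
        --time_based=1 --runtime=10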
00:20:49.274 [2024-12-06 04:24:01.566671] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:20:59.272 00:20:59.272 filename0: (groupid=0, jobs=1): err= 0: pid=87687: Fri Dec 6 04:24:11 2024 00:20:59.272 read: IOPS=260, BW=32.5MiB/s (34.1MB/s)(325MiB/10001msec) 00:20:59.272 slat (nsec): min=6524, max=49072, avg=9516.33, stdev=4166.63 00:20:59.272 clat (usec): min=10463, max=17037, avg=11511.33, stdev=528.84 00:20:59.272 lat (usec): min=10471, max=17049, avg=11520.84, stdev=529.09 00:20:59.272 clat percentiles (usec): 00:20:59.272 | 1.00th=[10945], 5.00th=[10945], 10.00th=[11076], 20.00th=[11076], 00:20:59.272 | 30.00th=[11207], 40.00th=[11207], 50.00th=[11469], 60.00th=[11600], 00:20:59.272 | 70.00th=[11600], 80.00th=[11863], 90.00th=[12125], 95.00th=[12518], 00:20:59.272 | 99.00th=[13173], 99.50th=[13435], 99.90th=[16909], 99.95th=[16909], 00:20:59.272 | 99.99th=[16909] 00:20:59.272 bw ( KiB/s): min=31488, max=34560, per=33.27%, avg=33226.11, stdev=1018.93, samples=19 00:20:59.272 iops : min= 246, max= 270, avg=259.58, stdev= 7.96, samples=19 00:20:59.272 lat (msec) : 20=100.00% 00:20:59.272 cpu : usr=91.35%, sys=8.08%, ctx=11, majf=0, minf=9 00:20:59.272 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:59.272 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:59.272 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:59.272 issued rwts: total=2601,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:59.272 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:59.272 filename0: (groupid=0, jobs=1): err= 0: pid=87688: Fri Dec 6 04:24:11 2024 00:20:59.272 read: IOPS=260, BW=32.5MiB/s (34.1MB/s)(325MiB/10003msec) 00:20:59.272 slat (nsec): min=6695, max=44935, avg=13223.81, stdev=4526.68 00:20:59.272 clat (usec): min=10791, max=16985, avg=11508.14, stdev=527.94 00:20:59.272 lat (usec): min=10804, max=16997, avg=11521.36, stdev=528.26 00:20:59.272 clat percentiles (usec): 00:20:59.272 | 1.00th=[10945], 5.00th=[10945], 10.00th=[10945], 20.00th=[11076], 00:20:59.272 | 30.00th=[11207], 40.00th=[11207], 50.00th=[11469], 60.00th=[11600], 00:20:59.272 | 70.00th=[11600], 80.00th=[11863], 90.00th=[12125], 95.00th=[12518], 00:20:59.272 | 99.00th=[13042], 99.50th=[13435], 99.90th=[16909], 99.95th=[16909], 00:20:59.272 | 99.99th=[16909] 00:20:59.272 bw ( KiB/s): min=31488, max=34560, per=33.27%, avg=33226.11, stdev=1018.93, samples=19 00:20:59.272 iops : min= 246, max= 270, avg=259.58, stdev= 7.96, samples=19 00:20:59.272 lat (msec) : 20=100.00% 00:20:59.272 cpu : usr=91.76%, sys=7.72%, ctx=6, majf=0, minf=9 00:20:59.272 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:59.272 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:59.272 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:59.272 issued rwts: total=2601,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:59.272 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:59.272 filename0: (groupid=0, jobs=1): err= 0: pid=87689: Fri Dec 6 04:24:11 2024 00:20:59.272 read: IOPS=260, BW=32.5MiB/s (34.1MB/s)(326MiB/10005msec) 00:20:59.272 slat (nsec): min=6655, max=45868, avg=13581.49, stdev=4629.51 00:20:59.272 clat (usec): min=5112, max=16984, avg=11495.08, stdev=557.04 00:20:59.272 lat (usec): min=5119, max=16998, avg=11508.66, stdev=557.34 00:20:59.272 clat percentiles (usec): 00:20:59.272 | 1.00th=[10945], 5.00th=[10945], 10.00th=[10945], 
20.00th=[11076], 00:20:59.272 | 30.00th=[11076], 40.00th=[11207], 50.00th=[11469], 60.00th=[11600], 00:20:59.272 | 70.00th=[11600], 80.00th=[11863], 90.00th=[12125], 95.00th=[12387], 00:20:59.272 | 99.00th=[13042], 99.50th=[13304], 99.90th=[16909], 99.95th=[16909], 00:20:59.272 | 99.99th=[16909] 00:20:59.272 bw ( KiB/s): min=31425, max=34560, per=33.31%, avg=33263.21, stdev=999.57, samples=19 00:20:59.272 iops : min= 245, max= 270, avg=259.84, stdev= 7.86, samples=19 00:20:59.272 lat (msec) : 10=0.12%, 20=99.88% 00:20:59.272 cpu : usr=91.91%, sys=7.57%, ctx=17, majf=0, minf=9 00:20:59.273 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:59.273 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:59.273 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:59.273 issued rwts: total=2604,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:59.273 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:59.273 00:20:59.273 Run status group 0 (all jobs): 00:20:59.273 READ: bw=97.5MiB/s (102MB/s), 32.5MiB/s-32.5MiB/s (34.1MB/s-34.1MB/s), io=976MiB (1023MB), run=10001-10005msec 00:20:59.531 04:24:11 -- target/dif.sh@132 -- # destroy_subsystems 0 00:20:59.531 04:24:11 -- target/dif.sh@43 -- # local sub 00:20:59.531 04:24:11 -- target/dif.sh@45 -- # for sub in "$@" 00:20:59.531 04:24:11 -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:59.531 04:24:11 -- target/dif.sh@36 -- # local sub_id=0 00:20:59.531 04:24:11 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:59.531 04:24:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.531 04:24:11 -- common/autotest_common.sh@10 -- # set +x 00:20:59.531 04:24:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.531 04:24:11 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:59.531 04:24:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.531 04:24:11 -- common/autotest_common.sh@10 -- # set +x 00:20:59.531 ************************************ 00:20:59.531 END TEST fio_dif_digest 00:20:59.531 ************************************ 00:20:59.531 04:24:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.531 00:20:59.531 real 0m11.079s 00:20:59.531 user 0m28.196s 00:20:59.531 sys 0m2.652s 00:20:59.531 04:24:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:59.531 04:24:11 -- common/autotest_common.sh@10 -- # set +x 00:20:59.531 04:24:11 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:20:59.531 04:24:11 -- target/dif.sh@147 -- # nvmftestfini 00:20:59.531 04:24:11 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:59.531 04:24:11 -- nvmf/common.sh@116 -- # sync 00:20:59.531 04:24:12 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:59.531 04:24:12 -- nvmf/common.sh@119 -- # set +e 00:20:59.531 04:24:12 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:59.531 04:24:12 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:59.531 rmmod nvme_tcp 00:20:59.531 rmmod nvme_fabrics 00:20:59.531 rmmod nvme_keyring 00:20:59.531 04:24:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:59.790 04:24:12 -- nvmf/common.sh@123 -- # set -e 00:20:59.790 04:24:12 -- nvmf/common.sh@124 -- # return 0 00:20:59.790 04:24:12 -- nvmf/common.sh@477 -- # '[' -n 86912 ']' 00:20:59.790 04:24:12 -- nvmf/common.sh@478 -- # killprocess 86912 00:20:59.790 04:24:12 -- common/autotest_common.sh@936 -- # '[' -z 86912 ']' 00:20:59.790 04:24:12 -- common/autotest_common.sh@940 -- # kill -0 86912 
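The teardown traced above (destroy_subsystems followed by nvmftestfini and killprocess) corresponds roughly to the following standalone steps; a sketch, with $nvmfpid standing in for the target pid that killprocess receives (86912 in this run).
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
    scripts/rpc.py bdev_null_delete bdev_null0
    sync
    modprobe -v -r nvme-tcp        # the rmmod lines above show this also pulls out nvme_fabrics and nvme_keyring
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid"                # killprocess first probes the pid with 'kill -0', then kills and waits for it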
00:20:59.790 04:24:12 -- common/autotest_common.sh@941 -- # uname 00:20:59.790 04:24:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:59.790 04:24:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86912 00:20:59.790 killing process with pid 86912 00:20:59.790 04:24:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:59.790 04:24:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:59.790 04:24:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86912' 00:20:59.790 04:24:12 -- common/autotest_common.sh@955 -- # kill 86912 00:20:59.790 04:24:12 -- common/autotest_common.sh@960 -- # wait 86912 00:21:00.047 04:24:12 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:21:00.047 04:24:12 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:00.305 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:00.305 Waiting for block devices as requested 00:21:00.305 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:21:00.305 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:21:00.563 04:24:12 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:00.563 04:24:12 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:00.564 04:24:12 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:00.564 04:24:12 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:00.564 04:24:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:00.564 04:24:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:00.564 04:24:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:00.564 04:24:12 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:21:00.564 00:21:00.564 real 1m1.053s 00:21:00.564 user 4m1.328s 00:21:00.564 sys 0m18.696s 00:21:00.564 04:24:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:00.564 04:24:12 -- common/autotest_common.sh@10 -- # set +x 00:21:00.564 ************************************ 00:21:00.564 END TEST nvmf_dif 00:21:00.564 ************************************ 00:21:00.564 04:24:12 -- spdk/autotest.sh@288 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:21:00.564 04:24:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:21:00.564 04:24:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:00.564 04:24:12 -- common/autotest_common.sh@10 -- # set +x 00:21:00.564 ************************************ 00:21:00.564 START TEST nvmf_abort_qd_sizes 00:21:00.564 ************************************ 00:21:00.564 04:24:12 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:21:00.564 * Looking for test storage... 
00:21:00.564 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:00.564 04:24:13 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:21:00.564 04:24:13 -- common/autotest_common.sh@1690 -- # lcov --version 00:21:00.564 04:24:13 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:21:00.823 04:24:13 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:21:00.823 04:24:13 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:21:00.823 04:24:13 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:21:00.823 04:24:13 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:21:00.823 04:24:13 -- scripts/common.sh@335 -- # IFS=.-: 00:21:00.823 04:24:13 -- scripts/common.sh@335 -- # read -ra ver1 00:21:00.823 04:24:13 -- scripts/common.sh@336 -- # IFS=.-: 00:21:00.823 04:24:13 -- scripts/common.sh@336 -- # read -ra ver2 00:21:00.823 04:24:13 -- scripts/common.sh@337 -- # local 'op=<' 00:21:00.823 04:24:13 -- scripts/common.sh@339 -- # ver1_l=2 00:21:00.823 04:24:13 -- scripts/common.sh@340 -- # ver2_l=1 00:21:00.823 04:24:13 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:21:00.823 04:24:13 -- scripts/common.sh@343 -- # case "$op" in 00:21:00.823 04:24:13 -- scripts/common.sh@344 -- # : 1 00:21:00.823 04:24:13 -- scripts/common.sh@363 -- # (( v = 0 )) 00:21:00.823 04:24:13 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:00.823 04:24:13 -- scripts/common.sh@364 -- # decimal 1 00:21:00.823 04:24:13 -- scripts/common.sh@352 -- # local d=1 00:21:00.823 04:24:13 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:00.823 04:24:13 -- scripts/common.sh@354 -- # echo 1 00:21:00.823 04:24:13 -- scripts/common.sh@364 -- # ver1[v]=1 00:21:00.823 04:24:13 -- scripts/common.sh@365 -- # decimal 2 00:21:00.823 04:24:13 -- scripts/common.sh@352 -- # local d=2 00:21:00.823 04:24:13 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:00.823 04:24:13 -- scripts/common.sh@354 -- # echo 2 00:21:00.823 04:24:13 -- scripts/common.sh@365 -- # ver2[v]=2 00:21:00.823 04:24:13 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:21:00.823 04:24:13 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:21:00.823 04:24:13 -- scripts/common.sh@367 -- # return 0 00:21:00.823 04:24:13 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:00.823 04:24:13 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:21:00.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:00.823 --rc genhtml_branch_coverage=1 00:21:00.823 --rc genhtml_function_coverage=1 00:21:00.823 --rc genhtml_legend=1 00:21:00.823 --rc geninfo_all_blocks=1 00:21:00.823 --rc geninfo_unexecuted_blocks=1 00:21:00.823 00:21:00.823 ' 00:21:00.823 04:24:13 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:21:00.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:00.823 --rc genhtml_branch_coverage=1 00:21:00.823 --rc genhtml_function_coverage=1 00:21:00.823 --rc genhtml_legend=1 00:21:00.823 --rc geninfo_all_blocks=1 00:21:00.823 --rc geninfo_unexecuted_blocks=1 00:21:00.823 00:21:00.823 ' 00:21:00.823 04:24:13 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:21:00.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:00.823 --rc genhtml_branch_coverage=1 00:21:00.823 --rc genhtml_function_coverage=1 00:21:00.823 --rc genhtml_legend=1 00:21:00.823 --rc geninfo_all_blocks=1 00:21:00.823 --rc geninfo_unexecuted_blocks=1 00:21:00.823 00:21:00.823 ' 00:21:00.823 
04:24:13 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:21:00.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:00.823 --rc genhtml_branch_coverage=1 00:21:00.823 --rc genhtml_function_coverage=1 00:21:00.823 --rc genhtml_legend=1 00:21:00.823 --rc geninfo_all_blocks=1 00:21:00.823 --rc geninfo_unexecuted_blocks=1 00:21:00.823 00:21:00.823 ' 00:21:00.823 04:24:13 -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:00.823 04:24:13 -- nvmf/common.sh@7 -- # uname -s 00:21:00.823 04:24:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:00.823 04:24:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:00.823 04:24:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:00.823 04:24:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:00.823 04:24:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:00.823 04:24:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:00.823 04:24:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:00.823 04:24:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:00.823 04:24:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:00.823 04:24:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:00.823 04:24:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cb4d3929-adbe-4142-b5d1-990bbf2d4fca 00:21:00.823 04:24:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=cb4d3929-adbe-4142-b5d1-990bbf2d4fca 00:21:00.823 04:24:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:00.823 04:24:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:00.823 04:24:13 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:00.823 04:24:13 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:00.823 04:24:13 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:00.823 04:24:13 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:00.823 04:24:13 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:00.823 04:24:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.823 04:24:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.823 04:24:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.823 04:24:13 -- paths/export.sh@5 -- # export PATH 00:21:00.823 04:24:13 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.823 04:24:13 -- nvmf/common.sh@46 -- # : 0 00:21:00.823 04:24:13 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:00.823 04:24:13 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:00.823 04:24:13 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:00.823 04:24:13 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:00.823 04:24:13 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:00.823 04:24:13 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:00.823 04:24:13 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:00.823 04:24:13 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:00.823 04:24:13 -- target/abort_qd_sizes.sh@73 -- # nvmftestinit 00:21:00.823 04:24:13 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:00.823 04:24:13 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:00.823 04:24:13 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:00.823 04:24:13 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:00.823 04:24:13 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:00.823 04:24:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:00.823 04:24:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:00.823 04:24:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:00.823 04:24:13 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:21:00.823 04:24:13 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:21:00.823 04:24:13 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:21:00.824 04:24:13 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:21:00.824 04:24:13 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:21:00.824 04:24:13 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:21:00.824 04:24:13 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:00.824 04:24:13 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:00.824 04:24:13 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:00.824 04:24:13 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:21:00.824 04:24:13 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:00.824 04:24:13 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:00.824 04:24:13 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:00.824 04:24:13 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:00.824 04:24:13 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:00.824 04:24:13 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:00.824 04:24:13 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:00.824 04:24:13 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:00.824 04:24:13 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:21:00.824 04:24:13 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:21:00.824 Cannot find device "nvmf_tgt_br" 00:21:00.824 04:24:13 -- nvmf/common.sh@154 -- # true 00:21:00.824 04:24:13 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:21:00.824 Cannot find device "nvmf_tgt_br2" 00:21:00.824 04:24:13 -- nvmf/common.sh@155 -- # true 
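The ip commands traced next build the test topology implied by the nvmf_veth_init variables above; the end state can be summarised as below (a sketch using those interface and address names).
    # root namespace:          nvmf_init_if  10.0.0.1/24  <-veth->  nvmf_init_br  (enslaved to bridge nvmf_br)
    # netns nvmf_tgt_ns_spdk:  nvmf_tgt_if   10.0.0.2/24  <-veth->  nvmf_tgt_br   (enslaved to nvmf_br)
    #                          nvmf_tgt_if2  10.0.0.3/24  <-veth->  nvmf_tgt_br2  (enslaved to nvmf_br)
    ip netns exec nvmf_tgt_ns_spdk ip -4 addr show    # target-side view once the setup below completes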
00:21:00.824 04:24:13 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:21:00.824 04:24:13 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:21:00.824 Cannot find device "nvmf_tgt_br" 00:21:00.824 04:24:13 -- nvmf/common.sh@157 -- # true 00:21:00.824 04:24:13 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:21:00.824 Cannot find device "nvmf_tgt_br2" 00:21:00.824 04:24:13 -- nvmf/common.sh@158 -- # true 00:21:00.824 04:24:13 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:21:00.824 04:24:13 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:21:00.824 04:24:13 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:00.824 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:00.824 04:24:13 -- nvmf/common.sh@161 -- # true 00:21:00.824 04:24:13 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:00.824 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:00.824 04:24:13 -- nvmf/common.sh@162 -- # true 00:21:00.824 04:24:13 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:21:00.824 04:24:13 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:00.824 04:24:13 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:00.824 04:24:13 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:01.083 04:24:13 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:01.083 04:24:13 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:01.083 04:24:13 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:01.083 04:24:13 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:01.083 04:24:13 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:01.083 04:24:13 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:21:01.083 04:24:13 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:21:01.083 04:24:13 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:21:01.083 04:24:13 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:21:01.083 04:24:13 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:01.083 04:24:13 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:01.083 04:24:13 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:01.083 04:24:13 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:21:01.083 04:24:13 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:21:01.083 04:24:13 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:21:01.083 04:24:13 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:01.083 04:24:13 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:01.083 04:24:13 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:01.083 04:24:13 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:01.083 04:24:13 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:21:01.083 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:01.083 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:21:01.083 00:21:01.083 --- 10.0.0.2 ping statistics --- 00:21:01.083 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:01.083 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:21:01.083 04:24:13 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:21:01.083 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:01.083 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:21:01.083 00:21:01.083 --- 10.0.0.3 ping statistics --- 00:21:01.083 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:01.083 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:21:01.083 04:24:13 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:01.083 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:01.083 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:21:01.083 00:21:01.083 --- 10.0.0.1 ping statistics --- 00:21:01.083 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:01.083 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:21:01.083 04:24:13 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:01.083 04:24:13 -- nvmf/common.sh@421 -- # return 0 00:21:01.083 04:24:13 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:21:01.083 04:24:13 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:01.650 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:01.909 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:21:01.909 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:21:01.909 04:24:14 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:01.909 04:24:14 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:01.909 04:24:14 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:01.909 04:24:14 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:01.909 04:24:14 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:01.909 04:24:14 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:01.909 04:24:14 -- target/abort_qd_sizes.sh@74 -- # nvmfappstart -m 0xf 00:21:01.909 04:24:14 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:01.909 04:24:14 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:01.909 04:24:14 -- common/autotest_common.sh@10 -- # set +x 00:21:01.909 04:24:14 -- nvmf/common.sh@469 -- # nvmfpid=88288 00:21:01.909 04:24:14 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:21:01.909 04:24:14 -- nvmf/common.sh@470 -- # waitforlisten 88288 00:21:01.909 04:24:14 -- common/autotest_common.sh@829 -- # '[' -z 88288 ']' 00:21:01.909 04:24:14 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:01.909 04:24:14 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:01.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:01.909 04:24:14 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:01.909 04:24:14 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:01.909 04:24:14 -- common/autotest_common.sh@10 -- # set +x 00:21:02.168 [2024-12-06 04:24:14.482214] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:21:02.168 [2024-12-06 04:24:14.482297] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:02.168 [2024-12-06 04:24:14.624608] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:02.168 [2024-12-06 04:24:14.713215] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:02.168 [2024-12-06 04:24:14.713442] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:02.168 [2024-12-06 04:24:14.713462] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:02.168 [2024-12-06 04:24:14.713474] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:02.168 [2024-12-06 04:24:14.713972] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:02.168 [2024-12-06 04:24:14.714119] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:02.168 [2024-12-06 04:24:14.714211] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:02.168 [2024-12-06 04:24:14.714204] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:03.122 04:24:15 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:03.122 04:24:15 -- common/autotest_common.sh@862 -- # return 0 00:21:03.122 04:24:15 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:03.122 04:24:15 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:03.122 04:24:15 -- common/autotest_common.sh@10 -- # set +x 00:21:03.122 04:24:15 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:03.122 04:24:15 -- target/abort_qd_sizes.sh@76 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:21:03.122 04:24:15 -- target/abort_qd_sizes.sh@78 -- # mapfile -t nvmes 00:21:03.122 04:24:15 -- target/abort_qd_sizes.sh@78 -- # nvme_in_userspace 00:21:03.122 04:24:15 -- scripts/common.sh@311 -- # local bdf bdfs 00:21:03.122 04:24:15 -- scripts/common.sh@312 -- # local nvmes 00:21:03.122 04:24:15 -- scripts/common.sh@314 -- # [[ -n '' ]] 00:21:03.122 04:24:15 -- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:21:03.122 04:24:15 -- scripts/common.sh@317 -- # iter_pci_class_code 01 08 02 00:21:03.122 04:24:15 -- scripts/common.sh@297 -- # local bdf= 00:21:03.122 04:24:15 -- scripts/common.sh@299 -- # iter_all_pci_class_code 01 08 02 00:21:03.122 04:24:15 -- scripts/common.sh@232 -- # local class 00:21:03.122 04:24:15 -- scripts/common.sh@233 -- # local subclass 00:21:03.122 04:24:15 -- scripts/common.sh@234 -- # local progif 00:21:03.122 04:24:15 -- scripts/common.sh@235 -- # printf %02x 1 00:21:03.122 04:24:15 -- scripts/common.sh@235 -- # class=01 00:21:03.122 04:24:15 -- scripts/common.sh@236 -- # printf %02x 8 00:21:03.122 04:24:15 -- scripts/common.sh@236 -- # subclass=08 00:21:03.122 04:24:15 -- scripts/common.sh@237 -- # printf %02x 2 00:21:03.122 04:24:15 -- scripts/common.sh@237 -- # progif=02 00:21:03.122 04:24:15 -- scripts/common.sh@239 -- # hash lspci 00:21:03.122 04:24:15 -- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']' 00:21:03.122 04:24:15 -- scripts/common.sh@241 -- # lspci -mm -n -D 00:21:03.122 04:24:15 -- scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 
00:21:03.122 04:24:15 -- scripts/common.sh@242 -- # grep -i -- -p02 00:21:03.122 04:24:15 -- scripts/common.sh@244 -- # tr -d '"' 00:21:03.122 04:24:15 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:21:03.122 04:24:15 -- scripts/common.sh@300 -- # pci_can_use 0000:00:06.0 00:21:03.122 04:24:15 -- scripts/common.sh@15 -- # local i 00:21:03.122 04:24:15 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:21:03.122 04:24:15 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:21:03.122 04:24:15 -- scripts/common.sh@24 -- # return 0 00:21:03.122 04:24:15 -- scripts/common.sh@301 -- # echo 0000:00:06.0 00:21:03.122 04:24:15 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:21:03.122 04:24:15 -- scripts/common.sh@300 -- # pci_can_use 0000:00:07.0 00:21:03.122 04:24:15 -- scripts/common.sh@15 -- # local i 00:21:03.122 04:24:15 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:21:03.122 04:24:15 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:21:03.122 04:24:15 -- scripts/common.sh@24 -- # return 0 00:21:03.122 04:24:15 -- scripts/common.sh@301 -- # echo 0000:00:07.0 00:21:03.122 04:24:15 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:21:03.122 04:24:15 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:06.0 ]] 00:21:03.122 04:24:15 -- scripts/common.sh@322 -- # uname -s 00:21:03.122 04:24:15 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:21:03.122 04:24:15 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:21:03.122 04:24:15 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:21:03.122 04:24:15 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:07.0 ]] 00:21:03.122 04:24:15 -- scripts/common.sh@322 -- # uname -s 00:21:03.122 04:24:15 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:21:03.122 04:24:15 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:21:03.122 04:24:15 -- scripts/common.sh@327 -- # (( 2 )) 00:21:03.122 04:24:15 -- scripts/common.sh@328 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:21:03.122 04:24:15 -- target/abort_qd_sizes.sh@79 -- # (( 2 > 0 )) 00:21:03.122 04:24:15 -- target/abort_qd_sizes.sh@81 -- # nvme=0000:00:06.0 00:21:03.122 04:24:15 -- target/abort_qd_sizes.sh@83 -- # run_test spdk_target_abort spdk_target 00:21:03.122 04:24:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:21:03.122 04:24:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:03.122 04:24:15 -- common/autotest_common.sh@10 -- # set +x 00:21:03.123 ************************************ 00:21:03.123 START TEST spdk_target_abort 00:21:03.123 ************************************ 00:21:03.123 04:24:15 -- common/autotest_common.sh@1114 -- # spdk_target 00:21:03.123 04:24:15 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:21:03.123 04:24:15 -- target/abort_qd_sizes.sh@44 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:21:03.123 04:24:15 -- target/abort_qd_sizes.sh@46 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:06.0 -b spdk_target 00:21:03.123 04:24:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.123 04:24:15 -- common/autotest_common.sh@10 -- # set +x 00:21:03.123 spdk_targetn1 00:21:03.123 04:24:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.123 04:24:15 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:03.123 04:24:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.123 04:24:15 -- common/autotest_common.sh@10 -- # set +x 00:21:03.382 [2024-12-06 
04:24:15.685566] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:03.382 04:24:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.382 04:24:15 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME 00:21:03.382 04:24:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.382 04:24:15 -- common/autotest_common.sh@10 -- # set +x 00:21:03.382 04:24:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.382 04:24:15 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1 00:21:03.382 04:24:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.382 04:24:15 -- common/autotest_common.sh@10 -- # set +x 00:21:03.382 04:24:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.382 04:24:15 -- target/abort_qd_sizes.sh@51 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420 00:21:03.382 04:24:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.382 04:24:15 -- common/autotest_common.sh@10 -- # set +x 00:21:03.382 [2024-12-06 04:24:15.713788] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:03.382 04:24:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.382 04:24:15 -- target/abort_qd_sizes.sh@53 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:spdk_target 00:21:03.382 04:24:15 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:21:03.382 04:24:15 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:21:03.382 04:24:15 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:21:03.382 04:24:15 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:21:03.382 04:24:15 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:21:03.382 04:24:15 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:21:03.382 04:24:15 -- target/abort_qd_sizes.sh@24 -- # local target r 00:21:03.382 04:24:15 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:21:03.382 04:24:15 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:03.382 04:24:15 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:21:03.382 04:24:15 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:03.382 04:24:15 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:21:03.382 04:24:15 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:03.382 04:24:15 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:21:03.382 04:24:15 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:03.382 04:24:15 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:03.382 04:24:15 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:03.382 04:24:15 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:21:03.382 04:24:15 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:03.382 04:24:15 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:21:06.666 Initializing NVMe Controllers 00:21:06.666 Attached to 
NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:21:06.666 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:21:06.666 Initialization complete. Launching workers. 00:21:06.666 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 9437, failed: 0 00:21:06.666 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1062, failed to submit 8375 00:21:06.666 success 819, unsuccess 243, failed 0 00:21:06.666 04:24:18 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:06.666 04:24:18 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:21:09.981 Initializing NVMe Controllers 00:21:09.981 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:21:09.981 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:21:09.981 Initialization complete. Launching workers. 00:21:09.981 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 8938, failed: 0 00:21:09.981 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1170, failed to submit 7768 00:21:09.981 success 408, unsuccess 762, failed 0 00:21:09.981 04:24:22 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:09.981 04:24:22 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:21:13.273 Initializing NVMe Controllers 00:21:13.273 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:21:13.273 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:21:13.273 Initialization complete. Launching workers. 
00:21:13.273 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 30114, failed: 0 00:21:13.273 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 2299, failed to submit 27815 00:21:13.273 success 464, unsuccess 1835, failed 0 00:21:13.273 04:24:25 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:spdk_target 00:21:13.273 04:24:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.273 04:24:25 -- common/autotest_common.sh@10 -- # set +x 00:21:13.273 04:24:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.273 04:24:25 -- target/abort_qd_sizes.sh@56 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:21:13.273 04:24:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.273 04:24:25 -- common/autotest_common.sh@10 -- # set +x 00:21:13.273 04:24:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.273 04:24:25 -- target/abort_qd_sizes.sh@62 -- # killprocess 88288 00:21:13.273 04:24:25 -- common/autotest_common.sh@936 -- # '[' -z 88288 ']' 00:21:13.273 04:24:25 -- common/autotest_common.sh@940 -- # kill -0 88288 00:21:13.273 04:24:25 -- common/autotest_common.sh@941 -- # uname 00:21:13.273 04:24:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:13.274 04:24:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88288 00:21:13.274 killing process with pid 88288 00:21:13.274 04:24:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:13.274 04:24:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:13.274 04:24:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88288' 00:21:13.274 04:24:25 -- common/autotest_common.sh@955 -- # kill 88288 00:21:13.274 04:24:25 -- common/autotest_common.sh@960 -- # wait 88288 00:21:13.532 00:21:13.532 real 0m10.420s 00:21:13.532 user 0m42.871s 00:21:13.532 sys 0m1.903s 00:21:13.532 04:24:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:13.532 ************************************ 00:21:13.532 END TEST spdk_target_abort 00:21:13.532 ************************************ 00:21:13.532 04:24:26 -- common/autotest_common.sh@10 -- # set +x 00:21:13.532 04:24:26 -- target/abort_qd_sizes.sh@84 -- # run_test kernel_target_abort kernel_target 00:21:13.532 04:24:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:21:13.532 04:24:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:13.532 04:24:26 -- common/autotest_common.sh@10 -- # set +x 00:21:13.532 ************************************ 00:21:13.532 START TEST kernel_target_abort 00:21:13.532 ************************************ 00:21:13.532 04:24:26 -- common/autotest_common.sh@1114 -- # kernel_target 00:21:13.532 04:24:26 -- target/abort_qd_sizes.sh@66 -- # local name=kernel_target 00:21:13.532 04:24:26 -- target/abort_qd_sizes.sh@68 -- # configure_kernel_target kernel_target 00:21:13.532 04:24:26 -- nvmf/common.sh@621 -- # kernel_name=kernel_target 00:21:13.532 04:24:26 -- nvmf/common.sh@622 -- # nvmet=/sys/kernel/config/nvmet 00:21:13.532 04:24:26 -- nvmf/common.sh@623 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/kernel_target 00:21:13.532 04:24:26 -- nvmf/common.sh@624 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:21:13.532 04:24:26 -- nvmf/common.sh@625 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:21:13.532 04:24:26 -- nvmf/common.sh@627 -- # local block nvme 00:21:13.532 04:24:26 -- 
nvmf/common.sh@629 -- # [[ ! -e /sys/module/nvmet ]] 00:21:13.532 04:24:26 -- nvmf/common.sh@630 -- # modprobe nvmet 00:21:13.790 04:24:26 -- nvmf/common.sh@633 -- # [[ -e /sys/kernel/config/nvmet ]] 00:21:13.790 04:24:26 -- nvmf/common.sh@635 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:14.049 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:14.049 Waiting for block devices as requested 00:21:14.049 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:21:14.049 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:21:14.307 04:24:26 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:21:14.308 04:24:26 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme0n1 ]] 00:21:14.308 04:24:26 -- nvmf/common.sh@640 -- # block_in_use nvme0n1 00:21:14.308 04:24:26 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:21:14.308 04:24:26 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:21:14.308 No valid GPT data, bailing 00:21:14.308 04:24:26 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:21:14.308 04:24:26 -- scripts/common.sh@393 -- # pt= 00:21:14.308 04:24:26 -- scripts/common.sh@394 -- # return 1 00:21:14.308 04:24:26 -- nvmf/common.sh@640 -- # nvme=/dev/nvme0n1 00:21:14.308 04:24:26 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:21:14.308 04:24:26 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n1 ]] 00:21:14.308 04:24:26 -- nvmf/common.sh@640 -- # block_in_use nvme1n1 00:21:14.308 04:24:26 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:21:14.308 04:24:26 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:21:14.308 No valid GPT data, bailing 00:21:14.308 04:24:26 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:21:14.308 04:24:26 -- scripts/common.sh@393 -- # pt= 00:21:14.308 04:24:26 -- scripts/common.sh@394 -- # return 1 00:21:14.308 04:24:26 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n1 00:21:14.308 04:24:26 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:21:14.308 04:24:26 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n2 ]] 00:21:14.308 04:24:26 -- nvmf/common.sh@640 -- # block_in_use nvme1n2 00:21:14.308 04:24:26 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:21:14.308 04:24:26 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:21:14.308 No valid GPT data, bailing 00:21:14.308 04:24:26 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:21:14.308 04:24:26 -- scripts/common.sh@393 -- # pt= 00:21:14.308 04:24:26 -- scripts/common.sh@394 -- # return 1 00:21:14.308 04:24:26 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n2 00:21:14.308 04:24:26 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:21:14.308 04:24:26 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n3 ]] 00:21:14.308 04:24:26 -- nvmf/common.sh@640 -- # block_in_use nvme1n3 00:21:14.308 04:24:26 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:21:14.308 04:24:26 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:21:14.566 No valid GPT data, bailing 00:21:14.566 04:24:26 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:21:14.566 04:24:26 -- scripts/common.sh@393 -- # pt= 00:21:14.566 04:24:26 -- scripts/common.sh@394 -- # return 1 00:21:14.566 04:24:26 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n3 00:21:14.566 04:24:26 -- nvmf/common.sh@643 -- # [[ -b 
/dev/nvme1n3 ]] 00:21:14.566 04:24:26 -- nvmf/common.sh@645 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:21:14.566 04:24:26 -- nvmf/common.sh@646 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:21:14.566 04:24:26 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:21:14.566 04:24:26 -- nvmf/common.sh@652 -- # echo SPDK-kernel_target 00:21:14.566 04:24:26 -- nvmf/common.sh@654 -- # echo 1 00:21:14.566 04:24:26 -- nvmf/common.sh@655 -- # echo /dev/nvme1n3 00:21:14.566 04:24:26 -- nvmf/common.sh@656 -- # echo 1 00:21:14.566 04:24:26 -- nvmf/common.sh@662 -- # echo 10.0.0.1 00:21:14.566 04:24:26 -- nvmf/common.sh@663 -- # echo tcp 00:21:14.566 04:24:26 -- nvmf/common.sh@664 -- # echo 4420 00:21:14.566 04:24:26 -- nvmf/common.sh@665 -- # echo ipv4 00:21:14.566 04:24:26 -- nvmf/common.sh@668 -- # ln -s /sys/kernel/config/nvmet/subsystems/kernel_target /sys/kernel/config/nvmet/ports/1/subsystems/ 00:21:14.566 04:24:26 -- nvmf/common.sh@671 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cb4d3929-adbe-4142-b5d1-990bbf2d4fca --hostid=cb4d3929-adbe-4142-b5d1-990bbf2d4fca -a 10.0.0.1 -t tcp -s 4420 00:21:14.566 00:21:14.566 Discovery Log Number of Records 2, Generation counter 2 00:21:14.566 =====Discovery Log Entry 0====== 00:21:14.566 trtype: tcp 00:21:14.566 adrfam: ipv4 00:21:14.566 subtype: current discovery subsystem 00:21:14.566 treq: not specified, sq flow control disable supported 00:21:14.566 portid: 1 00:21:14.566 trsvcid: 4420 00:21:14.566 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:21:14.566 traddr: 10.0.0.1 00:21:14.566 eflags: none 00:21:14.566 sectype: none 00:21:14.566 =====Discovery Log Entry 1====== 00:21:14.566 trtype: tcp 00:21:14.566 adrfam: ipv4 00:21:14.566 subtype: nvme subsystem 00:21:14.566 treq: not specified, sq flow control disable supported 00:21:14.566 portid: 1 00:21:14.566 trsvcid: 4420 00:21:14.566 subnqn: kernel_target 00:21:14.566 traddr: 10.0.0.1 00:21:14.566 eflags: none 00:21:14.566 sectype: none 00:21:14.566 04:24:26 -- target/abort_qd_sizes.sh@69 -- # rabort tcp IPv4 10.0.0.1 4420 kernel_target 00:21:14.566 04:24:26 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:21:14.566 04:24:26 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:21:14.566 04:24:26 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:21:14.566 04:24:26 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:21:14.566 04:24:26 -- target/abort_qd_sizes.sh@21 -- # local subnqn=kernel_target 00:21:14.566 04:24:26 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:21:14.566 04:24:26 -- target/abort_qd_sizes.sh@24 -- # local target r 00:21:14.566 04:24:26 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:21:14.566 04:24:26 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:14.566 04:24:26 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:21:14.566 04:24:26 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:14.566 04:24:26 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:21:14.566 04:24:26 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:14.566 04:24:26 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:21:14.566 04:24:26 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:14.566 04:24:26 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 
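The discovery log entries above are served by a kernel nvmet target assembled through configfs. A minimal sketch of that setup, assuming the standard /sys/kernel/config/nvmet attribute names (the xtrace shows only the echoed values, not the files they are redirected into); device path, address, service id and family are as logged:

# Sketch under the assumptions stated above; not the verbatim nvmf/common.sh helper.
subsys=/sys/kernel/config/nvmet/subsystems/kernel_target
ns="$subsys/namespaces/1"
port=/sys/kernel/config/nvmet/ports/1
mkdir "$subsys"
mkdir "$ns"
mkdir "$port"
echo /dev/nvme1n3 > "$ns/device_path"   # block device selected after the GPT checks above
echo 1            > "$ns/enable"
echo 10.0.0.1     > "$port/addr_traddr"
echo tcp          > "$port/addr_trtype"
echo 4420         > "$port/addr_trsvcid"
echo ipv4         > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"     # expose the subsystem on the listening port

The teardown traced later mirrors this in reverse: disable the namespace, remove the port symlink, rmdir the namespace, port and subsystem directories, then modprobe -r nvmet_tcp nvmet.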
00:21:14.566 04:24:26 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:14.566 04:24:26 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:21:14.566 04:24:26 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:14.566 04:24:26 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:21:17.851 Initializing NVMe Controllers 00:21:17.851 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:21:17.851 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:21:17.851 Initialization complete. Launching workers. 00:21:17.851 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 30221, failed: 0 00:21:17.851 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 30221, failed to submit 0 00:21:17.851 success 0, unsuccess 30221, failed 0 00:21:17.851 04:24:30 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:17.851 04:24:30 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:21:21.136 Initializing NVMe Controllers 00:21:21.136 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:21:21.136 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:21:21.136 Initialization complete. Launching workers. 00:21:21.136 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 65941, failed: 0 00:21:21.136 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 26448, failed to submit 39493 00:21:21.136 success 0, unsuccess 26448, failed 0 00:21:21.136 04:24:33 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:21.136 04:24:33 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:21:24.433 Initializing NVMe Controllers 00:21:24.434 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:21:24.434 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:21:24.434 Initialization complete. Launching workers. 
00:21:24.434 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 76773, failed: 0 00:21:24.434 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 19198, failed to submit 57575 00:21:24.434 success 0, unsuccess 19198, failed 0 00:21:24.434 04:24:36 -- target/abort_qd_sizes.sh@70 -- # clean_kernel_target 00:21:24.434 04:24:36 -- nvmf/common.sh@675 -- # [[ -e /sys/kernel/config/nvmet/subsystems/kernel_target ]] 00:21:24.434 04:24:36 -- nvmf/common.sh@677 -- # echo 0 00:21:24.434 04:24:36 -- nvmf/common.sh@679 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/kernel_target 00:21:24.434 04:24:36 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:21:24.434 04:24:36 -- nvmf/common.sh@681 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:21:24.434 04:24:36 -- nvmf/common.sh@682 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:21:24.434 04:24:36 -- nvmf/common.sh@684 -- # modules=(/sys/module/nvmet/holders/*) 00:21:24.434 04:24:36 -- nvmf/common.sh@686 -- # modprobe -r nvmet_tcp nvmet 00:21:24.434 ************************************ 00:21:24.434 END TEST kernel_target_abort 00:21:24.434 ************************************ 00:21:24.434 00:21:24.434 real 0m10.446s 00:21:24.434 user 0m5.269s 00:21:24.434 sys 0m2.502s 00:21:24.434 04:24:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:24.434 04:24:36 -- common/autotest_common.sh@10 -- # set +x 00:21:24.434 04:24:36 -- target/abort_qd_sizes.sh@86 -- # trap - SIGINT SIGTERM EXIT 00:21:24.434 04:24:36 -- target/abort_qd_sizes.sh@87 -- # nvmftestfini 00:21:24.434 04:24:36 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:24.434 04:24:36 -- nvmf/common.sh@116 -- # sync 00:21:24.434 04:24:36 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:24.434 04:24:36 -- nvmf/common.sh@119 -- # set +e 00:21:24.434 04:24:36 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:24.434 04:24:36 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:24.434 rmmod nvme_tcp 00:21:24.434 rmmod nvme_fabrics 00:21:24.434 rmmod nvme_keyring 00:21:24.434 04:24:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:24.434 04:24:36 -- nvmf/common.sh@123 -- # set -e 00:21:24.434 04:24:36 -- nvmf/common.sh@124 -- # return 0 00:21:24.434 04:24:36 -- nvmf/common.sh@477 -- # '[' -n 88288 ']' 00:21:24.434 04:24:36 -- nvmf/common.sh@478 -- # killprocess 88288 00:21:24.434 Process with pid 88288 is not found 00:21:24.434 04:24:36 -- common/autotest_common.sh@936 -- # '[' -z 88288 ']' 00:21:24.434 04:24:36 -- common/autotest_common.sh@940 -- # kill -0 88288 00:21:24.434 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (88288) - No such process 00:21:24.434 04:24:36 -- common/autotest_common.sh@963 -- # echo 'Process with pid 88288 is not found' 00:21:24.434 04:24:36 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:21:24.434 04:24:36 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:25.003 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:25.003 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:21:25.003 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:21:25.003 04:24:37 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:25.003 04:24:37 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:25.003 04:24:37 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:25.003 04:24:37 -- nvmf/common.sh@277 -- # 
remove_spdk_ns 00:21:25.003 04:24:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:25.003 04:24:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:25.003 04:24:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:25.003 04:24:37 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:21:25.003 00:21:25.003 real 0m24.409s 00:21:25.003 user 0m49.603s 00:21:25.003 sys 0m5.685s 00:21:25.003 04:24:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:25.003 04:24:37 -- common/autotest_common.sh@10 -- # set +x 00:21:25.003 ************************************ 00:21:25.003 END TEST nvmf_abort_qd_sizes 00:21:25.003 ************************************ 00:21:25.003 04:24:37 -- spdk/autotest.sh@298 -- # '[' 0 -eq 1 ']' 00:21:25.003 04:24:37 -- spdk/autotest.sh@302 -- # '[' 0 -eq 1 ']' 00:21:25.003 04:24:37 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:21:25.003 04:24:37 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:21:25.003 04:24:37 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:21:25.003 04:24:37 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:21:25.003 04:24:37 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:21:25.003 04:24:37 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:21:25.003 04:24:37 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:21:25.003 04:24:37 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:21:25.003 04:24:37 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:21:25.003 04:24:37 -- spdk/autotest.sh@353 -- # [[ 0 -eq 1 ]] 00:21:25.003 04:24:37 -- spdk/autotest.sh@357 -- # [[ 0 -eq 1 ]] 00:21:25.003 04:24:37 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:21:25.003 04:24:37 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:21:25.003 04:24:37 -- spdk/autotest.sh@370 -- # trap - SIGINT SIGTERM EXIT 00:21:25.003 04:24:37 -- spdk/autotest.sh@372 -- # timing_enter post_cleanup 00:21:25.003 04:24:37 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:25.003 04:24:37 -- common/autotest_common.sh@10 -- # set +x 00:21:25.003 04:24:37 -- spdk/autotest.sh@373 -- # autotest_cleanup 00:21:25.003 04:24:37 -- common/autotest_common.sh@1381 -- # local autotest_es=0 00:21:25.003 04:24:37 -- common/autotest_common.sh@1382 -- # xtrace_disable 00:21:25.003 04:24:37 -- common/autotest_common.sh@10 -- # set +x 00:21:26.909 INFO: APP EXITING 00:21:26.909 INFO: killing all VMs 00:21:26.909 INFO: killing vhost app 00:21:26.909 INFO: EXIT DONE 00:21:27.168 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:27.427 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:21:27.427 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:21:27.997 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:27.997 Cleaning 00:21:27.997 Removing: /var/run/dpdk/spdk0/config 00:21:27.997 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:21:27.997 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:21:27.997 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:21:27.997 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:21:27.997 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:21:27.997 Removing: /var/run/dpdk/spdk0/hugepage_info 00:21:27.997 Removing: /var/run/dpdk/spdk1/config 00:21:27.997 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:21:27.997 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:21:27.997 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 
00:21:27.997 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:21:27.997 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:21:27.997 Removing: /var/run/dpdk/spdk1/hugepage_info 00:21:27.997 Removing: /var/run/dpdk/spdk2/config 00:21:27.997 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:21:27.997 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:21:27.997 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:21:27.997 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:21:27.997 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:21:27.997 Removing: /var/run/dpdk/spdk2/hugepage_info 00:21:27.997 Removing: /var/run/dpdk/spdk3/config 00:21:28.257 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:21:28.257 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:21:28.257 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:21:28.257 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:21:28.257 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:21:28.257 Removing: /var/run/dpdk/spdk3/hugepage_info 00:21:28.257 Removing: /var/run/dpdk/spdk4/config 00:21:28.257 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:21:28.257 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:21:28.257 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:21:28.257 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:21:28.257 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:21:28.257 Removing: /var/run/dpdk/spdk4/hugepage_info 00:21:28.257 Removing: /dev/shm/nvmf_trace.0 00:21:28.257 Removing: /dev/shm/spdk_tgt_trace.pid65835 00:21:28.257 Removing: /var/run/dpdk/spdk0 00:21:28.257 Removing: /var/run/dpdk/spdk1 00:21:28.257 Removing: /var/run/dpdk/spdk2 00:21:28.257 Removing: /var/run/dpdk/spdk3 00:21:28.257 Removing: /var/run/dpdk/spdk4 00:21:28.257 Removing: /var/run/dpdk/spdk_pid65683 00:21:28.257 Removing: /var/run/dpdk/spdk_pid65835 00:21:28.257 Removing: /var/run/dpdk/spdk_pid66093 00:21:28.257 Removing: /var/run/dpdk/spdk_pid66284 00:21:28.257 Removing: /var/run/dpdk/spdk_pid66437 00:21:28.257 Removing: /var/run/dpdk/spdk_pid66514 00:21:28.257 Removing: /var/run/dpdk/spdk_pid66597 00:21:28.257 Removing: /var/run/dpdk/spdk_pid66695 00:21:28.257 Removing: /var/run/dpdk/spdk_pid66779 00:21:28.257 Removing: /var/run/dpdk/spdk_pid66812 00:21:28.257 Removing: /var/run/dpdk/spdk_pid66853 00:21:28.257 Removing: /var/run/dpdk/spdk_pid66916 00:21:28.257 Removing: /var/run/dpdk/spdk_pid66999 00:21:28.257 Removing: /var/run/dpdk/spdk_pid67450 00:21:28.257 Removing: /var/run/dpdk/spdk_pid67502 00:21:28.257 Removing: /var/run/dpdk/spdk_pid67553 00:21:28.257 Removing: /var/run/dpdk/spdk_pid67569 00:21:28.257 Removing: /var/run/dpdk/spdk_pid67636 00:21:28.257 Removing: /var/run/dpdk/spdk_pid67652 00:21:28.257 Removing: /var/run/dpdk/spdk_pid67721 00:21:28.257 Removing: /var/run/dpdk/spdk_pid67737 00:21:28.257 Removing: /var/run/dpdk/spdk_pid67788 00:21:28.257 Removing: /var/run/dpdk/spdk_pid67807 00:21:28.257 Removing: /var/run/dpdk/spdk_pid67847 00:21:28.257 Removing: /var/run/dpdk/spdk_pid67865 00:21:28.257 Removing: /var/run/dpdk/spdk_pid68000 00:21:28.257 Removing: /var/run/dpdk/spdk_pid68030 00:21:28.257 Removing: /var/run/dpdk/spdk_pid68118 00:21:28.257 Removing: /var/run/dpdk/spdk_pid68171 00:21:28.257 Removing: /var/run/dpdk/spdk_pid68195 00:21:28.257 Removing: /var/run/dpdk/spdk_pid68259 00:21:28.257 Removing: /var/run/dpdk/spdk_pid68279 00:21:28.257 Removing: /var/run/dpdk/spdk_pid68313 00:21:28.257 Removing: /var/run/dpdk/spdk_pid68333 
00:21:28.257 Removing: /var/run/dpdk/spdk_pid68367 00:21:28.257 Removing: /var/run/dpdk/spdk_pid68387 00:21:28.257 Removing: /var/run/dpdk/spdk_pid68421 00:21:28.257 Removing: /var/run/dpdk/spdk_pid68441 00:21:28.257 Removing: /var/run/dpdk/spdk_pid68481 00:21:28.257 Removing: /var/run/dpdk/spdk_pid68495 00:21:28.257 Removing: /var/run/dpdk/spdk_pid68535 00:21:28.257 Removing: /var/run/dpdk/spdk_pid68549 00:21:28.257 Removing: /var/run/dpdk/spdk_pid68589 00:21:28.257 Removing: /var/run/dpdk/spdk_pid68607 00:21:28.257 Removing: /var/run/dpdk/spdk_pid68643 00:21:28.257 Removing: /var/run/dpdk/spdk_pid68657 00:21:28.257 Removing: /var/run/dpdk/spdk_pid68697 00:21:28.257 Removing: /var/run/dpdk/spdk_pid68711 00:21:28.257 Removing: /var/run/dpdk/spdk_pid68751 00:21:28.257 Removing: /var/run/dpdk/spdk_pid68765 00:21:28.257 Removing: /var/run/dpdk/spdk_pid68805 00:21:28.257 Removing: /var/run/dpdk/spdk_pid68819 00:21:28.257 Removing: /var/run/dpdk/spdk_pid68854 00:21:28.257 Removing: /var/run/dpdk/spdk_pid68873 00:21:28.257 Removing: /var/run/dpdk/spdk_pid68902 00:21:28.257 Removing: /var/run/dpdk/spdk_pid68926 00:21:28.257 Removing: /var/run/dpdk/spdk_pid68956 00:21:28.257 Removing: /var/run/dpdk/spdk_pid68978 00:21:28.257 Removing: /var/run/dpdk/spdk_pid69012 00:21:28.257 Removing: /var/run/dpdk/spdk_pid69032 00:21:28.257 Removing: /var/run/dpdk/spdk_pid69061 00:21:28.257 Removing: /var/run/dpdk/spdk_pid69080 00:21:28.257 Removing: /var/run/dpdk/spdk_pid69115 00:21:28.257 Removing: /var/run/dpdk/spdk_pid69137 00:21:28.517 Removing: /var/run/dpdk/spdk_pid69175 00:21:28.517 Removing: /var/run/dpdk/spdk_pid69192 00:21:28.517 Removing: /var/run/dpdk/spdk_pid69235 00:21:28.517 Removing: /var/run/dpdk/spdk_pid69249 00:21:28.517 Removing: /var/run/dpdk/spdk_pid69289 00:21:28.517 Removing: /var/run/dpdk/spdk_pid69303 00:21:28.517 Removing: /var/run/dpdk/spdk_pid69343 00:21:28.517 Removing: /var/run/dpdk/spdk_pid69410 00:21:28.517 Removing: /var/run/dpdk/spdk_pid69510 00:21:28.517 Removing: /var/run/dpdk/spdk_pid69848 00:21:28.517 Removing: /var/run/dpdk/spdk_pid69865 00:21:28.517 Removing: /var/run/dpdk/spdk_pid69896 00:21:28.517 Removing: /var/run/dpdk/spdk_pid69909 00:21:28.517 Removing: /var/run/dpdk/spdk_pid69922 00:21:28.517 Removing: /var/run/dpdk/spdk_pid69946 00:21:28.517 Removing: /var/run/dpdk/spdk_pid69958 00:21:28.517 Removing: /var/run/dpdk/spdk_pid69972 00:21:28.517 Removing: /var/run/dpdk/spdk_pid69995 00:21:28.517 Removing: /var/run/dpdk/spdk_pid70008 00:21:28.517 Removing: /var/run/dpdk/spdk_pid70021 00:21:28.517 Removing: /var/run/dpdk/spdk_pid70039 00:21:28.517 Removing: /var/run/dpdk/spdk_pid70057 00:21:28.517 Removing: /var/run/dpdk/spdk_pid70071 00:21:28.517 Removing: /var/run/dpdk/spdk_pid70089 00:21:28.517 Removing: /var/run/dpdk/spdk_pid70101 00:21:28.517 Removing: /var/run/dpdk/spdk_pid70115 00:21:28.517 Removing: /var/run/dpdk/spdk_pid70137 00:21:28.517 Removing: /var/run/dpdk/spdk_pid70151 00:21:28.517 Removing: /var/run/dpdk/spdk_pid70164 00:21:28.517 Removing: /var/run/dpdk/spdk_pid70194 00:21:28.517 Removing: /var/run/dpdk/spdk_pid70212 00:21:28.517 Removing: /var/run/dpdk/spdk_pid70245 00:21:28.517 Removing: /var/run/dpdk/spdk_pid70309 00:21:28.517 Removing: /var/run/dpdk/spdk_pid70336 00:21:28.517 Removing: /var/run/dpdk/spdk_pid70351 00:21:28.517 Removing: /var/run/dpdk/spdk_pid70374 00:21:28.517 Removing: /var/run/dpdk/spdk_pid70389 00:21:28.517 Removing: /var/run/dpdk/spdk_pid70391 00:21:28.517 Removing: /var/run/dpdk/spdk_pid70437 00:21:28.517 Removing: 
/var/run/dpdk/spdk_pid70443 00:21:28.517 Removing: /var/run/dpdk/spdk_pid70475 00:21:28.517 Removing: /var/run/dpdk/spdk_pid70487 00:21:28.517 Removing: /var/run/dpdk/spdk_pid70490 00:21:28.517 Removing: /var/run/dpdk/spdk_pid70503 00:21:28.517 Removing: /var/run/dpdk/spdk_pid70505 00:21:28.517 Removing: /var/run/dpdk/spdk_pid70518 00:21:28.517 Removing: /var/run/dpdk/spdk_pid70520 00:21:28.517 Removing: /var/run/dpdk/spdk_pid70533 00:21:28.517 Removing: /var/run/dpdk/spdk_pid70560 00:21:28.517 Removing: /var/run/dpdk/spdk_pid70586 00:21:28.517 Removing: /var/run/dpdk/spdk_pid70596 00:21:28.517 Removing: /var/run/dpdk/spdk_pid70624 00:21:28.517 Removing: /var/run/dpdk/spdk_pid70638 00:21:28.517 Removing: /var/run/dpdk/spdk_pid70641 00:21:28.517 Removing: /var/run/dpdk/spdk_pid70687 00:21:28.517 Removing: /var/run/dpdk/spdk_pid70699 00:21:28.517 Removing: /var/run/dpdk/spdk_pid70725 00:21:28.517 Removing: /var/run/dpdk/spdk_pid70733 00:21:28.517 Removing: /var/run/dpdk/spdk_pid70740 00:21:28.517 Removing: /var/run/dpdk/spdk_pid70748 00:21:28.517 Removing: /var/run/dpdk/spdk_pid70755 00:21:28.517 Removing: /var/run/dpdk/spdk_pid70763 00:21:28.517 Removing: /var/run/dpdk/spdk_pid70776 00:21:28.517 Removing: /var/run/dpdk/spdk_pid70778 00:21:28.517 Removing: /var/run/dpdk/spdk_pid70859 00:21:28.517 Removing: /var/run/dpdk/spdk_pid70905 00:21:28.517 Removing: /var/run/dpdk/spdk_pid71018 00:21:28.517 Removing: /var/run/dpdk/spdk_pid71055 00:21:28.517 Removing: /var/run/dpdk/spdk_pid71099 00:21:28.517 Removing: /var/run/dpdk/spdk_pid71108 00:21:28.517 Removing: /var/run/dpdk/spdk_pid71128 00:21:28.517 Removing: /var/run/dpdk/spdk_pid71148 00:21:28.517 Removing: /var/run/dpdk/spdk_pid71178 00:21:28.517 Removing: /var/run/dpdk/spdk_pid71192 00:21:28.517 Removing: /var/run/dpdk/spdk_pid71268 00:21:28.517 Removing: /var/run/dpdk/spdk_pid71288 00:21:28.517 Removing: /var/run/dpdk/spdk_pid71331 00:21:28.517 Removing: /var/run/dpdk/spdk_pid71416 00:21:28.517 Removing: /var/run/dpdk/spdk_pid71472 00:21:28.517 Removing: /var/run/dpdk/spdk_pid71507 00:21:28.517 Removing: /var/run/dpdk/spdk_pid71610 00:21:28.517 Removing: /var/run/dpdk/spdk_pid71646 00:21:28.517 Removing: /var/run/dpdk/spdk_pid71683 00:21:28.517 Removing: /var/run/dpdk/spdk_pid71911 00:21:28.517 Removing: /var/run/dpdk/spdk_pid71999 00:21:28.517 Removing: /var/run/dpdk/spdk_pid72026 00:21:28.517 Removing: /var/run/dpdk/spdk_pid72363 00:21:28.517 Removing: /var/run/dpdk/spdk_pid72401 00:21:28.776 Removing: /var/run/dpdk/spdk_pid72712 00:21:28.776 Removing: /var/run/dpdk/spdk_pid73129 00:21:28.776 Removing: /var/run/dpdk/spdk_pid73393 00:21:28.776 Removing: /var/run/dpdk/spdk_pid74187 00:21:28.776 Removing: /var/run/dpdk/spdk_pid75039 00:21:28.776 Removing: /var/run/dpdk/spdk_pid75152 00:21:28.776 Removing: /var/run/dpdk/spdk_pid75225 00:21:28.776 Removing: /var/run/dpdk/spdk_pid76515 00:21:28.776 Removing: /var/run/dpdk/spdk_pid76739 00:21:28.776 Removing: /var/run/dpdk/spdk_pid77062 00:21:28.776 Removing: /var/run/dpdk/spdk_pid77173 00:21:28.776 Removing: /var/run/dpdk/spdk_pid77307 00:21:28.776 Removing: /var/run/dpdk/spdk_pid77334 00:21:28.776 Removing: /var/run/dpdk/spdk_pid77362 00:21:28.776 Removing: /var/run/dpdk/spdk_pid77395 00:21:28.776 Removing: /var/run/dpdk/spdk_pid77492 00:21:28.776 Removing: /var/run/dpdk/spdk_pid77632 00:21:28.776 Removing: /var/run/dpdk/spdk_pid77782 00:21:28.776 Removing: /var/run/dpdk/spdk_pid77857 00:21:28.776 Removing: /var/run/dpdk/spdk_pid78252 00:21:28.776 Removing: /var/run/dpdk/spdk_pid78601 
00:21:28.776 Removing: /var/run/dpdk/spdk_pid78609 00:21:28.776 Removing: /var/run/dpdk/spdk_pid80823 00:21:28.776 Removing: /var/run/dpdk/spdk_pid80832 00:21:28.776 Removing: /var/run/dpdk/spdk_pid81109 00:21:28.776 Removing: /var/run/dpdk/spdk_pid81129 00:21:28.776 Removing: /var/run/dpdk/spdk_pid81143 00:21:28.776 Removing: /var/run/dpdk/spdk_pid81168 00:21:28.776 Removing: /var/run/dpdk/spdk_pid81185 00:21:28.776 Removing: /var/run/dpdk/spdk_pid81263 00:21:28.776 Removing: /var/run/dpdk/spdk_pid81270 00:21:28.776 Removing: /var/run/dpdk/spdk_pid81378 00:21:28.776 Removing: /var/run/dpdk/spdk_pid81386 00:21:28.776 Removing: /var/run/dpdk/spdk_pid81494 00:21:28.776 Removing: /var/run/dpdk/spdk_pid81496 00:21:28.776 Removing: /var/run/dpdk/spdk_pid81904 00:21:28.776 Removing: /var/run/dpdk/spdk_pid81947 00:21:28.776 Removing: /var/run/dpdk/spdk_pid82056 00:21:28.776 Removing: /var/run/dpdk/spdk_pid82135 00:21:28.776 Removing: /var/run/dpdk/spdk_pid82451 00:21:28.776 Removing: /var/run/dpdk/spdk_pid82657 00:21:28.776 Removing: /var/run/dpdk/spdk_pid83050 00:21:28.776 Removing: /var/run/dpdk/spdk_pid83584 00:21:28.776 Removing: /var/run/dpdk/spdk_pid84028 00:21:28.776 Removing: /var/run/dpdk/spdk_pid84090 00:21:28.776 Removing: /var/run/dpdk/spdk_pid84141 00:21:28.776 Removing: /var/run/dpdk/spdk_pid84198 00:21:28.776 Removing: /var/run/dpdk/spdk_pid84319 00:21:28.776 Removing: /var/run/dpdk/spdk_pid84379 00:21:28.776 Removing: /var/run/dpdk/spdk_pid84438 00:21:28.776 Removing: /var/run/dpdk/spdk_pid84500 00:21:28.776 Removing: /var/run/dpdk/spdk_pid84829 00:21:28.776 Removing: /var/run/dpdk/spdk_pid86015 00:21:28.777 Removing: /var/run/dpdk/spdk_pid86160 00:21:28.777 Removing: /var/run/dpdk/spdk_pid86404 00:21:28.777 Removing: /var/run/dpdk/spdk_pid86969 00:21:28.777 Removing: /var/run/dpdk/spdk_pid87134 00:21:28.777 Removing: /var/run/dpdk/spdk_pid87292 00:21:28.777 Removing: /var/run/dpdk/spdk_pid87389 00:21:28.777 Removing: /var/run/dpdk/spdk_pid87564 00:21:28.777 Removing: /var/run/dpdk/spdk_pid87673 00:21:28.777 Removing: /var/run/dpdk/spdk_pid88340 00:21:28.777 Removing: /var/run/dpdk/spdk_pid88380 00:21:28.777 Removing: /var/run/dpdk/spdk_pid88415 00:21:28.777 Removing: /var/run/dpdk/spdk_pid88658 00:21:28.777 Removing: /var/run/dpdk/spdk_pid88699 00:21:28.777 Removing: /var/run/dpdk/spdk_pid88730 00:21:28.777 Clean 00:21:29.036 killing process with pid 60044 00:21:29.036 killing process with pid 60045 00:21:29.036 04:24:41 -- common/autotest_common.sh@1446 -- # return 0 00:21:29.036 04:24:41 -- spdk/autotest.sh@374 -- # timing_exit post_cleanup 00:21:29.036 04:24:41 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:29.036 04:24:41 -- common/autotest_common.sh@10 -- # set +x 00:21:29.036 04:24:41 -- spdk/autotest.sh@376 -- # timing_exit autotest 00:21:29.036 04:24:41 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:29.036 04:24:41 -- common/autotest_common.sh@10 -- # set +x 00:21:29.036 04:24:41 -- spdk/autotest.sh@377 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:21:29.036 04:24:41 -- spdk/autotest.sh@379 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:21:29.036 04:24:41 -- spdk/autotest.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:21:29.036 04:24:41 -- spdk/autotest.sh@381 -- # [[ y == y ]] 00:21:29.036 04:24:41 -- spdk/autotest.sh@383 -- # hostname 00:21:29.036 04:24:41 -- spdk/autotest.sh@383 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc 
genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:21:29.295 geninfo: WARNING: invalid characters removed from testname! 00:21:51.246 04:25:02 -- spdk/autotest.sh@384 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:53.151 04:25:05 -- spdk/autotest.sh@385 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:55.056 04:25:07 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:57.593 04:25:09 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:59.494 04:25:12 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:02.021 04:25:14 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:04.552 04:25:16 -- spdk/autotest.sh@393 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:22:04.552 04:25:16 -- common/autotest_common.sh@1689 -- $ [[ y == y ]] 00:22:04.552 04:25:16 -- common/autotest_common.sh@1690 -- $ lcov --version 00:22:04.552 04:25:16 -- common/autotest_common.sh@1690 -- $ awk '{print $NF}' 00:22:04.552 04:25:16 -- common/autotest_common.sh@1690 -- $ lt 1.15 2 00:22:04.552 04:25:16 -- scripts/common.sh@372 -- $ cmp_versions 1.15 '<' 2 00:22:04.552 04:25:16 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:22:04.552 04:25:17 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:22:04.552 04:25:17 -- scripts/common.sh@335 -- $ IFS=.-: 00:22:04.552 04:25:17 -- scripts/common.sh@335 -- $ read -ra ver1 00:22:04.552 04:25:17 -- scripts/common.sh@336 -- $ IFS=.-: 
00:22:04.552 04:25:17 -- scripts/common.sh@336 -- $ read -ra ver2 00:22:04.552 04:25:17 -- scripts/common.sh@337 -- $ local 'op=<' 00:22:04.552 04:25:17 -- scripts/common.sh@339 -- $ ver1_l=2 00:22:04.552 04:25:17 -- scripts/common.sh@340 -- $ ver2_l=1 00:22:04.552 04:25:17 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:22:04.552 04:25:17 -- scripts/common.sh@343 -- $ case "$op" in 00:22:04.552 04:25:17 -- scripts/common.sh@344 -- $ : 1 00:22:04.552 04:25:17 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:22:04.552 04:25:17 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:04.552 04:25:17 -- scripts/common.sh@364 -- $ decimal 1 00:22:04.552 04:25:17 -- scripts/common.sh@352 -- $ local d=1 00:22:04.552 04:25:17 -- scripts/common.sh@353 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:22:04.552 04:25:17 -- scripts/common.sh@354 -- $ echo 1 00:22:04.552 04:25:17 -- scripts/common.sh@364 -- $ ver1[v]=1 00:22:04.552 04:25:17 -- scripts/common.sh@365 -- $ decimal 2 00:22:04.552 04:25:17 -- scripts/common.sh@352 -- $ local d=2 00:22:04.552 04:25:17 -- scripts/common.sh@353 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:22:04.552 04:25:17 -- scripts/common.sh@354 -- $ echo 2 00:22:04.552 04:25:17 -- scripts/common.sh@365 -- $ ver2[v]=2 00:22:04.552 04:25:17 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:22:04.552 04:25:17 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] )) 00:22:04.552 04:25:17 -- scripts/common.sh@367 -- $ return 0 00:22:04.552 04:25:17 -- common/autotest_common.sh@1691 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:04.552 04:25:17 -- common/autotest_common.sh@1703 -- $ export 'LCOV_OPTS= 00:22:04.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:04.552 --rc genhtml_branch_coverage=1 00:22:04.552 --rc genhtml_function_coverage=1 00:22:04.552 --rc genhtml_legend=1 00:22:04.552 --rc geninfo_all_blocks=1 00:22:04.552 --rc geninfo_unexecuted_blocks=1 00:22:04.552 00:22:04.552 ' 00:22:04.552 04:25:17 -- common/autotest_common.sh@1703 -- $ LCOV_OPTS=' 00:22:04.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:04.552 --rc genhtml_branch_coverage=1 00:22:04.552 --rc genhtml_function_coverage=1 00:22:04.552 --rc genhtml_legend=1 00:22:04.552 --rc geninfo_all_blocks=1 00:22:04.552 --rc geninfo_unexecuted_blocks=1 00:22:04.552 00:22:04.552 ' 00:22:04.552 04:25:17 -- common/autotest_common.sh@1704 -- $ export 'LCOV=lcov 00:22:04.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:04.552 --rc genhtml_branch_coverage=1 00:22:04.552 --rc genhtml_function_coverage=1 00:22:04.552 --rc genhtml_legend=1 00:22:04.552 --rc geninfo_all_blocks=1 00:22:04.552 --rc geninfo_unexecuted_blocks=1 00:22:04.552 00:22:04.552 ' 00:22:04.552 04:25:17 -- common/autotest_common.sh@1704 -- $ LCOV='lcov 00:22:04.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:04.552 --rc genhtml_branch_coverage=1 00:22:04.552 --rc genhtml_function_coverage=1 00:22:04.552 --rc genhtml_legend=1 00:22:04.552 --rc geninfo_all_blocks=1 00:22:04.552 --rc geninfo_unexecuted_blocks=1 00:22:04.552 00:22:04.552 ' 00:22:04.552 04:25:17 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:04.552 04:25:17 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:22:04.552 04:25:17 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:04.552 04:25:17 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:04.552 04:25:17 -- 
paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:04.552 04:25:17 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:04.552 04:25:17 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:04.552 04:25:17 -- paths/export.sh@5 -- $ export PATH 00:22:04.552 04:25:17 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:04.552 04:25:17 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:22:04.552 04:25:17 -- common/autobuild_common.sh@440 -- $ date +%s 00:22:04.552 04:25:17 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1733459117.XXXXXX 00:22:04.552 04:25:17 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1733459117.1OuWyr 00:22:04.552 04:25:17 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:22:04.552 04:25:17 -- common/autobuild_common.sh@446 -- $ '[' -n v23.11 ']' 00:22:04.552 04:25:17 -- common/autobuild_common.sh@447 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:22:04.552 04:25:17 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:22:04.552 04:25:17 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:22:04.552 04:25:17 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:22:04.552 04:25:17 -- common/autobuild_common.sh@456 -- $ get_config_params 00:22:04.553 04:25:17 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:22:04.553 04:25:17 -- common/autotest_common.sh@10 -- $ set +x 00:22:04.553 04:25:17 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:22:04.553 04:25:17 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:22:04.553 04:25:17 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 
00:22:04.553 04:25:17 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:22:04.553 04:25:17 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:22:04.553 04:25:17 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:22:04.553 04:25:17 -- spdk/autopackage.sh@19 -- $ timing_finish 00:22:04.553 04:25:17 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:22:04.553 04:25:17 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:22:04.553 04:25:17 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:22:04.553 04:25:17 -- spdk/autopackage.sh@20 -- $ exit 0 00:22:04.553 + [[ -n 5912 ]] 00:22:04.553 + sudo kill 5912 00:22:04.820 [Pipeline] } 00:22:04.839 [Pipeline] // timeout 00:22:04.846 [Pipeline] } 00:22:04.863 [Pipeline] // stage 00:22:04.870 [Pipeline] } 00:22:04.886 [Pipeline] // catchError 00:22:04.897 [Pipeline] stage 00:22:04.900 [Pipeline] { (Stop VM) 00:22:04.912 [Pipeline] sh 00:22:05.192 + vagrant halt 00:22:08.480 ==> default: Halting domain... 00:22:13.834 [Pipeline] sh 00:22:14.112 + vagrant destroy -f 00:22:17.397 ==> default: Removing domain... 00:22:17.411 [Pipeline] sh 00:22:17.697 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/output 00:22:17.706 [Pipeline] } 00:22:17.727 [Pipeline] // stage 00:22:17.734 [Pipeline] } 00:22:17.753 [Pipeline] // dir 00:22:17.760 [Pipeline] } 00:22:17.781 [Pipeline] // wrap 00:22:17.790 [Pipeline] } 00:22:17.808 [Pipeline] // catchError 00:22:17.818 [Pipeline] stage 00:22:17.821 [Pipeline] { (Epilogue) 00:22:17.837 [Pipeline] sh 00:22:18.122 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:22:23.408 [Pipeline] catchError 00:22:23.409 [Pipeline] { 00:22:23.419 [Pipeline] sh 00:22:23.744 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:22:23.744 Artifacts sizes are good 00:22:23.752 [Pipeline] } 00:22:23.760 [Pipeline] // catchError 00:22:23.767 [Pipeline] archiveArtifacts 00:22:23.772 Archiving artifacts 00:22:23.882 [Pipeline] cleanWs 00:22:23.893 [WS-CLEANUP] Deleting project workspace... 00:22:23.893 [WS-CLEANUP] Deferred wipeout is used... 00:22:23.898 [WS-CLEANUP] done 00:22:23.900 [Pipeline] } 00:22:23.914 [Pipeline] // stage 00:22:23.917 [Pipeline] } 00:22:23.929 [Pipeline] // node 00:22:23.933 [Pipeline] End of Pipeline 00:22:24.110 Finished: SUCCESS
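For reference, the coverage post-processing traced near the end of the run follows a capture, merge, filter pattern. A condensed sketch with the long --rc coverage options omitted for brevity (paths, filters and flags are as they appear in the log):

out=/home/vagrant/spdk_repo/spdk/../output
# capture coverage gathered during the tests
lcov -q -c --no-external -d /home/vagrant/spdk_repo/spdk \
     -t fedora39-cloud-1721788873-2326 -o "$out/cov_test.info"
# merge with the baseline captured before the tests
lcov -q -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"
# drop paths that are not SPDK sources proper
lcov -q -r "$out/cov_total.info" '*/dpdk/*' -o "$out/cov_total.info"
lcov -q -r "$out/cov_total.info" '/usr/*' --ignore-errors unused,unused -o "$out/cov_total.info"
lcov -q -r "$out/cov_total.info" '*/examples/vmd/*' -o "$out/cov_total.info"
lcov -q -r "$out/cov_total.info" '*/app/spdk_lspci/*' -o "$out/cov_total.info"
lcov -q -r "$out/cov_total.info" '*/app/spdk_top/*' -o "$out/cov_total.info"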